Chapter 12. Change built-in pages
Chapter 12. Change built-in pages By the end of this section, you will be able to modify and configure the visibility of any element in the system-generated pages. Some elements generated by the system cannot be modified from the Developer Portal, such as the Signup, Dashboard, and Account pages. This guide shows how to use CSS and JavaScript to customize the content on these pages. The system-generated pages follow backend rules for access and visibility, so their URLs must be specific, hard-coded values. A Knowledge Base article in the customer portal provides a list of system-generated pages and their URLs. Caution The 3scale system-generated pages are subject to change, although infrequently. These changes may break any customizations that you implement following this guide. If you can avoid using these hacks, please do so. Before you continue, be sure that you can monitor for any disruptive changes and do the necessary maintenance work to keep your portal functioning. 12.1. Identify the elements The first and most important thing to do is identify what you want to hide. To do that, use Firebug (or any other developer tools, such as Chrome Developer Tools or Opera Dragonfly). Choose the desired element, right-click it in the console, and select Copy CSS path. This way you save the exact CSS path, which makes the element easy to manipulate. Remember, if the element is part of the sidebar navigation widget, you will also have to specify its position in the list. For this, you can use either the "+" selector (for example, to choose the 3rd li element: ul + li + li + li) or the :nth-child(n) CSS3 pseudo-class. 12.2. Modify or hide the elements Having identified the elements, you can now change their display settings. Depending on the type of element, you can choose between two methods: CSS manipulation or a jQuery script. CSS manipulation is more lightweight and reliable, but it does not work well for some kinds of elements that exist on a number of pages; for example, the 3rd element in the sidebar of the Admin Portal's Dashboard also exists in the Account section but has a different value. Some trickier implementations require CSS3, which is not supported by old browsers. The next two sections show both of these approaches. 12.3. Option A: CSS As an example, try to hide the latest element from the Dashboard page. Following the first step, you have identified its CSS path as: #three-scale .dashboard_bubble Keep in mind that it is the second box with the same path, so you will use the "+" selector. Your path will now look like this: .main_layout #three-scale .dashboard_bubble + .dashboard_bubble /* or */ .main_layout #three-scale .dashboard_bubble:nth-child(1) Changing the display property to none makes that box invisible: .main_layout #three-scale .dashboard_bubble:nth-child(1) { display: none; } 12.4. Option B: jQuery If you have a trickier element to hide, such as a sidebar menu element, it is better to use some jQuery. The CSS path of these elements is identical on the Dashboard and Account sections, and you do not want to hide elements in both sections. So choose the element based on both the CSS path and the content. In this example, assume you want to hide the messages section from the Dashboard's sidebar. Your CSS path is: #three-scale #submenu li a In order to match the content, use the .text() function. Also, include the code inside the document's head and inside the ready function, so it runs after all the content has been generated.
The resulting code snippet will look like this: $(function() { $('#three-scale #submenu li a').each(function() { if ($(this).text() == "Messages") $(this).parent().css('display', 'none'); }); }); This is not the only solution; it just shows one possible way of doing it. The same example could be done in pure CSS with CSS3 selectors based on attribute values. For the full details, see the CSS3 Selectors specification.
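As an illustration of that pure-CSS alternative, here is a minimal sketch that matches the link's href attribute instead of its text; the /admin/messages/received value is an assumed placeholder, so inspect the real Messages link in your portal and substitute its actual href. Note that CSS cannot select a parent element, so this hides the link itself rather than its parent li.

/* Hypothetical href value - replace with the real path of the Messages link. */
#three-scale #submenu li a[href$="/admin/messages/received"] {
  display: none;
}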
[ "#three-scale .dashboard_bubble", ".main_layout #three-scale .dashboard_bubble + .dashboard_bubble /* or */ .main_layout #three-scale .dashboard_bubble:nth-child(1)", ".main_layout #three-scale .dashboard_bubble:nth-child(1) { display: none; }", "#three-scale #submenu li a", "USD(function() { USD('#three-scale #submenu li a').each(function() { if (USD(this).text() == \"Messages\") USD(this).parent().css('display', 'none'); }); });" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/change-built-in-pages
Chapter 33. Security
Chapter 33. Security When firewalld starts, net.netfilter.nf_conntrack_max is no longer reset to default if its configuration exists Previously, firewalld reset the nf_conntrack settings to their default values when it was started or restarted. As a consequence, the net.netfilter.nf_conntrack_max setting was restored to its default value. With this update, each time firewalld starts, it reloads nf_conntrack sysctls as they are configured in /etc/sysctl.conf and /etc/sysctl.d. As a result, net.netfilter.nf_conntrack_max maintains the user-configured value. (BZ#1462977) Tomcat can now be started using tomcat-jsvc with SELinux in enforcing mode In Red Hat Enterprise Linux 7.4, the tomcat_t unconfined domain was not correctly defined in the SELinux policy. Consequently, the Tomcat server could not be started by the tomcat-jsvc service with SELinux in enforcing mode. This update allows the tomcat_t domain to use the dac_override, setuid, and kill capability rules. As a result, Tomcat is now able to start through tomcat-jsvc with SELinux in enforcing mode. (BZ#1470735) SELinux now allows vdsm to communicate with lldpad Prior to this update, SELinux in enforcing mode denied the vdsm daemon access to lldpad information. Consequently, vdsm was not able to work correctly. With this update, a rule to allow a virtd_t domain to send data to an lldpad_t domain through the dgram socket has been added to the selinux-policy packages. As a result, vdsm labeled as virtd_t can now communicate with lldpad labeled as lldpad_t if SELinux is set to enforcing mode. (BZ#1472722) OpenSSH servers without Privilege Separation no longer crash Prior to this update, a pointer had been dereferenced before its validity was checked. Consequently, OpenSSH servers with the Privilege Separation option turned off crashed during the session cleanup. With this update, pointers are checked properly, and OpenSSH servers no longer crash while running without Privilege Separation due to the described bug. Note that disabling OpenSSH Privilege Separation is not recommended. (BZ#1488083) The clevis luks bind command no longer fails with the DISA STIG-compliant password policy Previously, passwords generated as part of the clevis luks bind command were not compliant with the Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) password policy set in the pwquality.conf file. Consequently, clevis luks bind failed on DISA STIG-compliant systems in certain cases. With this update, passwords are generated using a utility designed to generate random passwords that pass the password policy, and clevis luks bind now succeeds in the described scenario. (BZ#1500975) WinSCP 5.10 now works properly with OpenSSH Previously, OpenSSH incorrectly recognized WinSCP version 5.10 as the older version 5.1. As a consequence, the compatibility bits for WinSCP version 5.1 were enabled for WinSCP 5.10, and the newer version did not work properly with OpenSSH. With this update, the version selectors have been fixed, and WinSCP 5.10 now works properly with OpenSSH servers. (BZ#1496808) SFTP no longer allows the creation of zero-length files in read-only mode Prior to this update, the process_open function in the OpenSSH SFTP server did not properly prevent write operations in read-only mode. Consequently, attackers were allowed to create zero-length files. With this update, the function has been fixed, and the SFTP server no longer allows any file creation in read-only mode. (BZ#1517226)
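As a hedged illustration of the firewalld note above, a persistent net.netfilter.nf_conntrack_max value can be kept in a drop-in file under /etc/sysctl.d, which firewalld now reloads on start; the file name and the value below are example choices only.

# /etc/sysctl.d/90-conntrack.conf (example name; size the value for your workload)
net.netfilter.nf_conntrack_max = 262144

# Apply the file immediately without rebooting:
sysctl -p /etc/sysctl.d/90-conntrack.conf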
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_security
Chapter 19. Authentication and Interoperability
Chapter 19. Authentication and Interoperability Use of AD and LDAP sudo providers The Active Directory (AD) provider is a back end used to connect to an AD server. In Red Hat Enterprise Linux 7.2, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the [domain] section of the sssd.conf file. DNSSEC available as Technology Preview in Identity Management Identity Management servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance the security of the DNS protocol. DNS zones hosted on Identity Management servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that Identity Management servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices described in the Red Hat Enterprise Linux Networking Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/ch-Configure_Host_Names.html#sec-Recommended_Naming_Practices . Nunc Stans event framework available for Directory Server A new Nunc Stans event framework to handle multiple simultaneous connections has been added as a Technology Preview. The framework allows supporting several thousand active connections with no performance degradation. It is disabled by default. Browser for the JSON-RPC API in IdM is available This update implements a browser for the JSON-RPC API in Identity Management. The browser can be used to view the API. Note that this feature is experimental and the API is not yet supported. New packages: ipsilon The ipsilon packages provide the Ipsilon identity provider service for federated single sign-on (SSO). Ipsilon links authentication providers and applications or utilities to allow for SSO. It includes a server and utilities to configure Apache-based service providers. The Ipsilon server and toolkit are designed to configure Apache-based identity service providers. The server is a pluggable self-contained mod_wsgi application that provides federated SSO to web applications. Ipsilon is introduced in this release as a Technology Preview. Customers are advised not to consider integration of this service for production environments at this time.
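For illustration of the AD sudo provider setting, the fragment below sketches an sssd.conf domain section; the domain name example.com and the accompanying provider lines are assumptions that must match your existing SSSD configuration.

# /etc/sssd/sssd.conf (fragment; "example.com" is a placeholder domain)
[domain/example.com]
id_provider = ad
access_provider = ad
# Technology Preview: evaluate AD-hosted sudo rules through the AD provider
sudo_provider = ad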
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/technology-preview-authentication_and_interoperability
Chapter 275. Ref Component
Chapter 275. Ref Component Available as of Camel version 1.2 The ref: component is used to look up existing endpoints bound in the Registry. 275.1. URI format ref:someName[?options] Where someName is the name of an endpoint in the Registry (usually, but not always, the Spring registry). If you are using the Spring registry, someName would be the bean ID of an endpoint in the Spring registry. 275.2. Ref Options The Ref component has no options. The Ref endpoint is configured using URI syntax: ref:name with the following path and query parameters: 275.2.1. Path Parameters (1 parameter): Name Description Default Type name Required Name of the endpoint to look up in the registry. String 275.2.2. Query Parameters (4 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN/ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice that if the bridgeErrorHandler option is enabled, then this option is not in use. By default, the consumer will deal with exceptions, which will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 275.3. Runtime lookup This component can be used when you need dynamic discovery of endpoints in the Registry where you can compute the URI at runtime. Then you can look up the endpoint using the following code: // lookup the endpoint String myEndpointRef = "bigspenderOrder"; Endpoint endpoint = context.getEndpoint("ref:" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange); And you could have a list of endpoints defined in the Registry such as: <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring"> <endpoint id="normalOrder" uri="activemq:order.slow"/> <endpoint id="bigspenderOrder" uri="activemq:order.high"/> </camelContext> 275.4. Sample In the sample below, we use ref: in the URI to reference the endpoint with the Spring ID endpoint2 : <to uri="ref:endpoint2"/> You could, of course, have used the ref attribute instead: <to ref="endpoint2"/> Which is the more common way to write it.
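For additional context, the following minimal RouteBuilder sketch (not part of the original chapter) routes messages to the registry-bound endpoint defined in the XML above; the direct:start input endpoint is an assumed placeholder.

import org.apache.camel.builder.RouteBuilder;

public class RefRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        // "direct:start" is only an illustrative entry point; "ref:bigspenderOrder"
        // resolves the endpoint registered in the Registry under the ID "bigspenderOrder".
        from("direct:start").to("ref:bigspenderOrder");
    }
}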
[ "ref:someName[?options]", "ref:name", "// lookup the endpoint String myEndpointRef = \"bigspenderOrder\"; Endpoint endpoint = context.getEndpoint(\"ref:\" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange);", "<camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <endpoint id=\"normalOrder\" uri=\"activemq:order.slow\"/> <endpoint id=\"bigspenderOrder\" uri=\"activemq:order.high\"/> </camelContext>", "<to ref=\"endpoint2\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ref-component
Chapter 16. Compiler Toolsets documentation
Chapter 16. Compiler Toolsets documentation The descriptions of the three compiler toolsets (LLVM Toolset, Go Toolset, and Rust Toolset) have been moved to a separate documentation set under Red Hat Developer Tools.
null
https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/compiler_toolsets_documentation
Chapter 36. Configuring the environment mode in KIE Server and Business Central
Chapter 36. Configuring the environment mode in KIE Server and Business Central You can set KIE Server to run in production mode or in development mode. Development mode provides a flexible deployment policy that enables you to update existing deployment units (KIE containers) while maintaining active process instances for small changes. It also enables you to reset the deployment unit state before updating active process instances for larger changes. Production mode is optimal for production environments, where each deployment creates a new deployment unit. In a development environment, you can click Deploy in Business Central to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. Each time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option in Business Central is disabled and you can click only Deploy to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. Procedure To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production. To configure the deployment behavior for a project in Business Central, go to project Settings General Settings Version and toggle the Development Mode option. Note By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode.
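As a sketch of the system property step, assuming KIE Server runs on Red Hat JBoss EAP, the property can be added to the server's standalone configuration (passing -Dorg.kie.server.mode=development on the server start command is an equivalent alternative); this is one common approach, not the only supported one.

<!-- standalone*.xml fragment (sketch): run KIE Server in development mode -->
<system-properties>
  <property name="org.kie.server.mode" value="development"/>
</system-properties>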
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/configuring-environment-mode-proc_configuring-central
19.5. Mail User Agents
19.5. Mail User Agents Red Hat Enterprise Linux offers a variety of email programs, both graphical email client programs, such as Evolution, and text-based email programs, such as mutt. The remainder of this section focuses on securing communication between a client and a server. 19.5.1. Securing Communication Popular MUAs included with Red Hat Enterprise Linux, such as Evolution and Mutt, offer SSL-encrypted email sessions. Like any other service that flows over a network unencrypted, important email information, such as user names, passwords, and entire messages, may be intercepted and viewed by users on the network. Additionally, since the standard POP and IMAP protocols pass authentication information unencrypted, it is possible for an attacker to gain access to user accounts by collecting user names and passwords as they are passed over the network. 19.5.1.1. Secure Email Clients Most Linux MUAs designed to check email on remote servers support SSL encryption. To use SSL when retrieving email, it must be enabled on both the email client and the server. SSL is easy to enable on the client side, often done with the click of a button in the MUA's configuration window or via an option in the MUA's configuration file. Secure IMAP and POP have known port numbers (993 and 995, respectively) that the MUA uses to authenticate and download messages. 19.5.1.2. Securing Email Client Communications Offering SSL encryption to IMAP and POP users on the email server is a simple matter. First, create an SSL certificate. This can be done in two ways: by applying to a Certificate Authority (CA) for an SSL certificate or by creating a self-signed certificate. Warning Self-signed certificates should be used for testing purposes only. Any server used in a production environment should use an SSL certificate granted by a CA. To create a self-signed SSL certificate for IMAP or POP, change to the /etc/pki/dovecot/ directory, edit the certificate parameters in the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer, and type the following commands, as root: dovecot]# rm -f certs/dovecot.pem private/dovecot.pem dovecot]# /usr/libexec/dovecot/mkcert.sh Once finished, make sure you have the following configurations in your /etc/dovecot/conf.d/10-ssl.conf file: ssl_cert = </etc/pki/dovecot/certs/dovecot.pem ssl_key = </etc/pki/dovecot/private/dovecot.pem Execute the service dovecot restart command to restart the dovecot daemon. Alternatively, the stunnel command can be used as an encryption wrapper around the standard, non-secure connections to IMAP or POP services. The stunnel utility uses external OpenSSL libraries included with Red Hat Enterprise Linux to provide strong cryptography and to protect the network connections. It is recommended to apply to a CA to obtain an SSL certificate, but it is also possible to create a self-signed certificate. See Using stunnel in the Red Hat Enterprise Linux 6 Security Guide for instructions on how to install stunnel and create its basic configuration. To configure stunnel as a wrapper for IMAPS and POP3S, add the following lines to the /etc/stunnel/stunnel.conf configuration file: [pop3s] accept = 995 connect = 110 [imaps] accept = 993 connect = 143 The Security Guide also explains how to start and stop stunnel. Once you start it, it is possible to use an IMAP or a POP email client and connect to the email server using SSL encryption.
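As an optional verification step that is not part of the original procedure, you can confirm that the secured services answer on the expected ports with openssl s_client; mail.example.com is a placeholder host name.

# Check the IMAPS listener (use port 995 for POP3S)
openssl s_client -connect mail.example.com:993
# A successful handshake prints the certificate chain followed by the server's IMAP banner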
[ "dovecot]# rm -f certs/dovecot.pem private/dovecot.pem dovecot]# /usr/libexec/dovecot/mkcert.sh", "ssl_cert = </etc/pki/dovecot/certs/dovecot.pem ssl_key = </etc/pki/dovecot/private/dovecot.pem", "[pop3s] accept = 995 connect = 110 [imaps] accept = 993 connect = 143" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-email-mua
4.3. Controlling Access to luci
4.3. Controlling Access to luci Since the initial release of Red Hat Enterprise Linux 6, the following features have been added to the Users and Permissions page. As of Red Hat Enterprise Linux 6.2, the root user or a user who has been granted luci administrator permissions on a system running luci can control access to the various luci components by setting permissions for the individual users on a system. As of Red Hat Enterprise Linux 6.3, the root user or a user who has been granted luci administrator permissions can add users to the luci interface and then set the user permissions for that user. You will still need to add that user to the system and set up a password for that user, but this feature allows you to configure permissions for the user before the user has logged in to luci for the first time. As of Red Hat Enterprise Linux 6.4, the root user or a user who has been granted luci administrator permissions can also use the luci interface to delete users from the luci interface, which resets any permissions you have configured for that user. Note You can modify the way in which luci performs authentication by editing the /etc/pam.d/luci file on the system. For information on using Linux-PAM, see the pam (8) man page. To add users, delete users, or set the user permissions, log in to luci as root or as a user who has previously been granted administrator permissions and click the Admin selection in the upper right corner of the luci screen. This brings up the Users and Permissions page, which displays the existing users. To add a user to the luci interface, click on Add a User and enter the name of the user to add. You can then set permissions for that user, although you will still need to set up a password for that user. To delete users from the luci interface, resetting any permissions you have configured for that user, select the user or users and click on Delete Selected. To set or change permissions for a user, select the user from the dropdown menu under User Permissions. This allows you to set the following permissions: Luci Administrator Grants the user the same permissions as the root user, with full permissions on all clusters and the ability to set or remove permissions on all other users except root, whose permissions cannot be restricted. Can Create Clusters Allows the user to create new clusters, as described in Section 4.4, "Creating a Cluster". Can Import Existing Clusters Allows the user to add an existing cluster to the luci interface, as described in Section 5.1, "Adding an Existing Cluster to the luci Interface". For each cluster that has been created or imported to luci, you can set the following permissions for the indicated user: Can View This Cluster Allows the user to view the specified cluster. Can Change the Cluster Configuration Allows the user to modify the configuration for the specified cluster, with the exception of adding and removing cluster nodes. Can Enable, Disable, Relocate, and Migrate Service Groups Allows the user to manage high-availability services, as described in Section 5.5, "Managing High-Availability Services". Can Stop, Start, and Reboot Cluster Nodes Allows the user to manage the individual nodes of a cluster, as described in Section 5.3, "Managing Cluster Nodes". Can Add and Delete Nodes Allows the user to add and delete nodes from a cluster, as described in Section 4.4, "Creating a Cluster". Can Remove This Cluster from Luci Allows the user to remove a cluster from the luci interface, as described in Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters". Click Submit for the permissions to take effect, or click Reset to return to the initial values.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-control-luci-access-conga-CA
Server Guide
Server Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
[ "bin/kc.[sh|bat] start --db-url-host=mykeycloakdb", "export KC_DB_URL_HOST=mykeycloakdb", "db-url-host=mykeycloakdb", "bin/kc.[sh|bat] start --help", "db-url-host=USD{MY_DB_HOST}", "db-url-host=USD{MY_DB_HOST:mydb}", "bin/kc.[sh|bat] --config-file=/path/to/myconfig.conf start", "keytool -importpass -alias kc.db-password -keystore keystore.p12 -storepass keystorepass -storetype PKCS12 -v", "bin/kc.[sh|bat] start --config-keystore=/path/to/keystore.p12 --config-keystore-password=storepass --config-keystore-type=PKCS12", "bin/kc.[sh|bat] start-dev", "bin/kc.[sh|bat] start", "bin/kc.[sh|bat] build <build-options>", "bin/kc.[sh|bat] build --help", "bin/kc.[sh|bat] build --db=postgres", "bin/kc.[sh|bat] start --optimized <configuration-options>", "bin/kc.[sh|bat] build --db=postgres", "db-url-host=keycloak-postgres db-username=keycloak db-password=change_me hostname=mykeycloak.acme.com https-certificate-file", "bin/kc.[sh|bat] start --optimized", "export JAVA_OPTS_APPEND=\"-Djava.net.preferIPv4Stack=true\"", "FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname \"CN=server\" -alias server -ext \"SAN:c=DNS:localhost,IP:127.0.0.1\" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]", "A example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build", "FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki", "FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && dnf --installroot /mnt/rootfs clean all && rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs /", "build . 
-t mykeycloak", "run --name mykeycloak -p 8443:8443 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me mykeycloak start --optimized", "run --name mykeycloak -p 3000:8443 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname-port=3000", "run --name mykeycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev", "run --name mykeycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:24 start --db=postgres --features=token-exchange --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> --https-key-store-file=<file> --https-key-store-password=<password>", "setting the admin username -e KEYCLOAK_ADMIN=<admin-user-name> setting the initial password -e KEYCLOAK_ADMIN_PASSWORD=change_me", "run --name keycloak_unoptimized -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me -v /path/to/realm/data:/opt/keycloak/data/import registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev --import-realm", "run --name mykeycloak -p 8080:8080 -m 1g -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me -e JAVA_OPTS_KC_HEAP=\"-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65\" registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev", "bin/kc.[sh|bat] start --https-certificate-file=/path/to/certfile.pem --https-certificate-key-file=/path/to/keyfile.pem", "bin/kc.[sh|bat] start --https-key-store-file=/path/to/existing-keystore-file", "bin/kc.[sh|bat] start --https-key-store-password=<value>", "bin/kc.[sh|bat] start --https-protocols=<protocol>[,<protocol>]", "bin/kc.[sh|bat] start --https-port=<port>", "bin/kc.[sh|bat] start --https-trust-store-file=/path/to/file", "bin/kc.[sh|bat] start --https-trust-store-password=<value>", "bin/kc.[sh|bat] start --https-client-auth=<none|request|required>", "bin/kc.[sh|bat] start --hostname=<host>", "bin/kc.[sh|bat] start --hostname-url=<scheme>://<host>:<port>/<path>", "bin/kc.[sh|bat] start --hostname=<value> --hostname-strict-backchannel=true", "bin/kc.[sh|bat] start --hostname-admin=<host>", "bin/kc.[sh|bat] start --hostname-admin-url=<scheme>://<host>:<port>/<path>", "bin/kc.[sh|bat] start --hostname=mykeycloak --http-enabled=true --proxy-headers=forwarded|xforwarded", "bin/kc.[sh|bat] start --hostname-url=https://mykeycloak", "bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-strict-backchannel=true", "bin/kc.[sh|bat] start --hostname-url=https://mykeycloak:8989", "bin/kc.[sh|bat] start --proxy-headers=forwarded --https-port=8543 --hostname-url=https://my-keycloak.org:8443 --hostname-admin-url=https://admin.my-keycloak.org:8443", "bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true", "bin/kc.[sh|bat] start --proxy-headers forwarded", "bin/kc.[sh|bat] start --proxy <mode>", "bin/kc.[sh|bat] start --proxy-headers=forwarded|xforwarded --hostname-strict=false", "bin/kc.[sh|bat] start --spi-sticky-session-encoder-infinispan-should-attach-route=false", "bin/kc.[sh|bat] build --spi-x509cert-lookup-provider=<provider>", "bin/kc.[sh|bat] start --spi-x509cert-lookup-<provider>-ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup-<provider>-ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup-<provider>-certificate-chain-length=10", "FROM registry.redhat.io/rhbk/keycloak-rhel9:24 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.3.0.23.09/ojdbc11-23.3.0.23.09.jar 
/opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.3.0.23.09/orai18n-23.3.0.23.09.jar /opt/keycloak/providers/orai18n.jar Setting the build parameter for the database: ENV KC_DB=oracle Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build", "FROM registry.redhat.io/rhbk/keycloak-rhel9:24 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.4.2.jre11/mssql-jdbc-12.4.2.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar Setting the build parameter for the database: ENV KC_DB=mssql Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build", "bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me", "bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase", "bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver", "show server_encoding;", "create database keycloak with encoding 'UTF8';", "FROM registry.redhat.io/rhbk/keycloak-rhel9:24 ADD --chmod=0666 https://github.com/awslabs/aws-advanced-jdbc-wrapper/releases/download/2.3.1/aws-advanced-jdbc-wrapper-2.3.1.jar /opt/keycloak/providers/aws-advanced-jdbc-wrapper.jar", "bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900", "bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=false", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-initialize-empty=false", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql>", "bin/kc.[sh|bat] build --cache=ispn", "bin/kc.[sh|bat] build --cache-config-file=my-cache-file.xml", "bin/kc.[sh|bat] build --cache-stack=<stack>", "bin/kc.[sh|bat] build --cache-stack=<ec2|google|azure>", "<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> <SSL_KEY_EXCHANGE keystore_name=\"server.jks\" keystore_password=\"password\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"keycloak\"> <transport lock-timeout=\"60000\" stack=\"my-encrypt-udp\"/> </cache-container>", "<cache-container name=\"keycloak\" statistics=\"true\"> </cache-container>", "<local-cache name=\"realms\" statistics=\"true\"> </local-cache>", "bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value>", "HTTPS_PROXY=https://www-proxy.acme.com:8080 NO_PROXY=google.com,login.facebook.com", ".*\\.(google|googleapis)\\.com", "bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings=\"'*\\\\\\.(google|googleapis)\\\\\\.com;http://www-proxy.acme.com:8080'\"", ".*\\.(google|googleapis)\\.com;http://proxyuser:[email protected]:8080", "All 
requests to Google APIs use http://www-proxy.acme.com:8080 as proxy .*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080 All requests to internal systems use no proxy .*\\.acme\\.com;NO_PROXY All other requests use http://fallback:8080 as proxy .*;http://fallback:8080", "bin/kc.[sh|bat] start --truststore-paths=/opt/truststore/myTrustStore.pfx,/opt/other-truststore/myOtherTrustStore.pem", "bin/kc.[sh|bat] build --features=\"<name>[,<name>]\"", "bin/kc.[sh|bat] build --features=\"docker,token-exchange\"", "bin/kc.[sh|bat] build --features=\"preview\"", "bin/kc.[sh|bat] build --features-disabled=\"<name>[,<name>]\"", "bin/kc.[sh|bat] build --features-disabled=\"impersonation\"", "spi-<spi-id>-<provider-id>-<property>=<value>", "spi-connections-http-client-default-connection-pool-size=10", "bin/kc.[sh|bat] start --spi-connections-http-client-default-connection-pool-size=10", "bin/kc.[sh|bat] build --spi-email-template-provider=mycustomprovider", "bin/kc.[sh|bat] build --spi-email-template-mycustomprovider-enabled=true", "bin/kc.[sh|bat] start --log-level=<root-level>", "bin/kc.[sh|bat] start --log-level=\"<root-level>,<org.category1>:<org.category1-level>\"", "bin/kc.[sh|bat] start --log-level=\"INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info\"", "bin/kc.[sh|bat] start --log=\"<handler1>,<handler2>\"", "bin/kc.[sh|bat] start --log-console-format=\"'<format>'\"", "bin/kc.[sh|bat] start --log-console-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"", "bin/kc.[sh|bat] start --log-console-output=json", "{\"timestamp\":\"2022-02-25T10:31:32.452+01:00\",\"sequence\":8442,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080\",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host-name\",\"processName\":\"QuarkusEntryPoint\",\"processId\":36946}", "bin/kc.[sh|bat] start --log-console-output=default", "2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. 
Listening on: http://0.0.0.0:8080", "bin/kc.[sh|bat] start --log-console-color=<false|true>", "bin/kc.[sh|bat] start --log=\"console,file\"", "bin/kc.[sh|bat] start --log=\"console,file\" --log-file=<path-to>/<your-file.log>", "bin/kc.[sh|bat] start --log-file-format=\"<pattern>\"", "fips-mode-setup --check", "fips-mode-setup --enable", "keytool -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -keystore USDKEYCLOAK_HOME/conf/server.keystore -alias localhost -dname CN=localhost -keypass passwordpassword", "securerandom.strongAlgorithms=PKCS11:SunPKCS11-NSS-FIPS", "keytool -keystore USDKEYCLOAK_HOME/conf/server.keystore -storetype bcfks -providername BCFIPS -providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -providerpath USDKEYCLOAK_HOME/providers/bc-fips-*.jar -alias localhost -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -dname CN=localhost -keypass passwordpassword -J-Djava.security.properties=/tmp/kc.keystore-create.java.security", "bin/kc.[sh|bat] start --features=fips --hostname=localhost --https-key-store-password=passwordpassword --log-level=INFO,org.keycloak.common.crypto:TRACE,org.keycloak.crypto:TRACE", "KC(BCFIPS version 1.000203 Approved Mode, FIPS-JVM: enabled) version 1.0 - class org.keycloak.crypto.fips.KeycloakFipsSecurityProvider,", "--spi-password-hashing-pbkdf2-sha256-max-padding-length=14", "fips.provider.7=XMLDSig", "-Djava.security.properties=/location/to/your/file/kc.java.security", "cp USDKEYCLOAK_HOME/providers/bc-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/ cp USDKEYCLOAK_HOME/providers/bctls-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/", "echo \"keystore.type=bcfks fips.keystore.type=bcfks\" > /tmp/kcadm.java.security export KC_OPTS=\"-Djava.security.properties=/tmp/kcadm.java.security\"", "FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder ADD files /tmp/files/ WORKDIR /opt/keycloak RUN cp /tmp/files/*.jar /opt/keycloak/providers/ RUN cp /tmp/files/keycloak-fips.keystore.* /opt/keycloak/conf/server.keystore RUN cp /tmp/files/kc.java.security /opt/keycloak/conf/ RUN /opt/keycloak/bin/kc.sh build --features=fips --fips-mode=strict FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]", "{ \"status\": \"UP\", \"checks\": [] }", "{ \"status\": \"UP\", \"checks\": [ { \"name\": \"Keycloak database connections health check\", \"status\": \"UP\" } ] }", "bin/kc.[sh|bat] build --health-enabled=true", "curl --head -fsS http://localhost:8080/health/ready", "bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true", "bin/kc.[sh|bat] start --metrics-enabled=true", "HELP base_gc_total Displays the total number of collections that have occurred. This attribute lists -1 if the collection count is undefined for this collector. TYPE base_gc_total counter base_gc_total{name=\"G1 Young Generation\",} 14.0 HELP jvm_memory_usage_after_gc_percent The percentage of long-lived heap pool used after the last GC event, in the range [0..1] TYPE jvm_memory_usage_after_gc_percent gauge jvm_memory_usage_after_gc_percent{area=\"heap\",pool=\"long-lived\",} 0.0 HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset TYPE jvm_threads_peak_threads gauge jvm_threads_peak_threads 113.0 HELP agroal_active_count Number of active connections. These connections are in use and not available to be acquired. 
TYPE agroal_active_count gauge agroal_active_count{datasource=\"default\",} 0.0 HELP base_memory_maxHeap_bytes Displays the maximum amount of memory, in bytes, that can be used for memory management. TYPE base_memory_maxHeap_bytes gauge base_memory_maxHeap_bytes 1.6781410304E10 HELP process_start_time_seconds Start time of the process since unix epoch. TYPE process_start_time_seconds gauge process_start_time_seconds 1.675188449054E9 HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time TYPE system_load_average_1m gauge system_load_average_1m 4.005859375", "bin/kc.[sh|bat] export --help", "bin/kc.[sh|bat] export --dir <dir>", "bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100", "bin/kc.[sh|bat] export --file <file>", "bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm", "bin/kc.[sh|bat] import --help", "bin/kc.[sh|bat] import --dir <dir>", "bin/kc.[sh|bat] import --dir <dir> --override false", "bin/kc.[sh|bat] import --file <file>", "bin/kc.[sh|bat] start --import-realm", "{ \"realm\": \"USD{MY_REALM_NAME}\", \"enabled\": true, }", "bin/kc.[sh|bat] build --vault=file", "bin/kc.[sh|bat] build --vault=keystore", "bin/kc.[sh|bat] start --vault-dir=/my/path", "USD{vault.<realmname>_<secretname>}", "keytool -importpass -alias <realm-name>_<alias> -keystore keystore.p12 -storepass keystorepassword", "bin/kc.[sh|bat] start --vault-file=/path/to/keystore.p12 --vault-pass=<value> --vault-type=<value>", "sso__realm_ldap__credential" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/server_guide//db-
Chapter 11. PMML model examples
Chapter 11. PMML model examples PMML defines an XML schema that enables PMML models to be used between different PMML-compliant platforms. The PMML specification enables multiple software platforms to work with the same file for authoring, testing, and production execution, assuming producer and consumer conformance are met. The following are examples of PMML Regression, Scorecard, Tree, Mining, and Clustering models. These examples illustrate the supported models that you can integrate with your decision services in Red Hat Process Automation Manager. For more PMML examples, see the DMG PMML Sample Files page. Example PMML Regression model <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header copyright="JBoss"/> <DataDictionary numberOfFields="5"> <DataField dataType="double" name="fld1" optype="continuous"/> <DataField dataType="double" name="fld2" optype="continuous"/> <DataField dataType="string" name="fld3" optype="categorical"> <Value value="x"/> <Value value="y"/> </DataField> <DataField dataType="double" name="fld4" optype="continuous"/> <DataField dataType="double" name="fld5" optype="continuous"/> </DataDictionary> <RegressionModel algorithmName="linearRegression" functionName="regression" modelName="LinReg" normalizationMethod="logit" targetFieldName="fld4"> <MiningSchema> <MiningField name="fld1"/> <MiningField name="fld2"/> <MiningField name="fld3"/> <MiningField name="fld4" usageType="predicted"/> <MiningField name="fld5" usageType="target"/> </MiningSchema> <RegressionTable intercept="0.5"> <NumericPredictor coefficient="5" exponent="2" name="fld1"/> <NumericPredictor coefficient="2" exponent="1" name="fld2"/> <CategoricalPredictor coefficient="-3" name="fld3" value="x"/> <CategoricalPredictor coefficient="3" name="fld3" value="y"/> <PredictorTerm coefficient="0.4"> <FieldRef field="fld1"/> <FieldRef field="fld2"/> </PredictorTerm> </RegressionTable> </RegressionModel> </PMML> Example PMML Scorecard model <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header copyright="JBoss"/> <DataDictionary numberOfFields="4"> <DataField name="param1" optype="continuous" dataType="double"/> <DataField name="param2" optype="continuous" dataType="double"/> <DataField name="overallScore" optype="continuous" dataType="double" /> <DataField name="finalscore" optype="continuous" dataType="double" /> </DataDictionary> <Scorecard modelName="ScorecardCompoundPredicate" useReasonCodes="true" isScorable="true" functionName="regression" baselineScore="15" initialScore="0.8" reasonCodeAlgorithm="pointsAbove"> <MiningSchema> <MiningField name="param1" usageType="active" invalidValueTreatment="asMissing"> </MiningField> <MiningField name="param2" usageType="active" invalidValueTreatment="asMissing"> </MiningField> <MiningField name="overallScore" usageType="target"/> <MiningField name="finalscore" usageType="predicted"/> </MiningSchema> <Characteristics> <Characteristic name="ch1" baselineScore="50" reasonCode="reasonCh1"> <Attribute partialScore="20"> <SimplePredicate field="param1" operator="lessThan" value="20"/> </Attribute> <Attribute partialScore="100"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="param1" operator="greaterOrEqual" value="20"/> <SimplePredicate field="param2" 
operator="lessOrEqual" value="25"/> </CompoundPredicate> </Attribute> <Attribute partialScore="200"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="param1" operator="greaterOrEqual" value="20"/> <SimplePredicate field="param2" operator="greaterThan" value="25"/> </CompoundPredicate> </Attribute> </Characteristic> <Characteristic name="ch2" reasonCode="reasonCh2"> <Attribute partialScore="10"> <CompoundPredicate booleanOperator="or"> <SimplePredicate field="param2" operator="lessOrEqual" value="-5"/> <SimplePredicate field="param2" operator="greaterOrEqual" value="50"/> </CompoundPredicate> </Attribute> <Attribute partialScore="20"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="param2" operator="greaterThan" value="-5"/> <SimplePredicate field="param2" operator="lessThan" value="50"/> </CompoundPredicate> </Attribute> </Characteristic> </Characteristics> </Scorecard> </PMML> Example PMML Tree model <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header copyright="JBOSS"/> <DataDictionary numberOfFields="5"> <DataField dataType="double" name="fld1" optype="continuous"/> <DataField dataType="double" name="fld2" optype="continuous"/> <DataField dataType="string" name="fld3" optype="categorical"> <Value value="true"/> <Value value="false"/> </DataField> <DataField dataType="string" name="fld4" optype="categorical"> <Value value="optA"/> <Value value="optB"/> <Value value="optC"/> </DataField> <DataField dataType="string" name="fld5" optype="categorical"> <Value value="tgtX"/> <Value value="tgtY"/> <Value value="tgtZ"/> </DataField> </DataDictionary> <TreeModel functionName="classification" modelName="TreeTest"> <MiningSchema> <MiningField name="fld1"/> <MiningField name="fld2"/> <MiningField name="fld3"/> <MiningField name="fld4"/> <MiningField name="fld5" usageType="predicted"/> </MiningSchema> <Node score="tgtX"> <True/> <Node score="tgtX"> <SimplePredicate field="fld4" operator="equal" value="optA"/> <Node score="tgtX"> <CompoundPredicate booleanOperator="surrogate"> <SimplePredicate field="fld1" operator="lessThan" value="30.0"/> <SimplePredicate field="fld2" operator="greaterThan" value="20.0"/> </CompoundPredicate> <Node score="tgtX"> <SimplePredicate field="fld2" operator="lessThan" value="40.0"/> </Node> <Node score="tgtZ"> <SimplePredicate field="fld2" operator="greaterOrEqual" value="10.0"/> </Node> </Node> <Node score="tgtZ"> <CompoundPredicate booleanOperator="or"> <SimplePredicate field="fld1" operator="greaterOrEqual" value="60.0"/> <SimplePredicate field="fld1" operator="lessOrEqual" value="70.0"/> </CompoundPredicate> <Node score="tgtZ"> <SimpleSetPredicate booleanOperator="isNotIn" field="fld4"> <Array type="string">optA optB</Array> </SimpleSetPredicate> </Node> </Node> </Node> <Node score="tgtY"> <CompoundPredicate booleanOperator="or"> <SimplePredicate field="fld4" operator="equal" value="optA"/> <SimplePredicate field="fld4" operator="equal" value="optC"/> </CompoundPredicate> <Node score="tgtY"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="fld1" operator="greaterThan" value="10.0"/> <SimplePredicate field="fld1" operator="lessThan" value="50.0"/> <SimplePredicate field="fld4" operator="equal" value="optA"/> <SimplePredicate field="fld2" operator="lessThan" value="100.0"/> <SimplePredicate field="fld3" operator="equal" value="false"/> </CompoundPredicate> </Node> 
<Node score="tgtZ"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="fld4" operator="equal" value="optC"/> <SimplePredicate field="fld2" operator="lessThan" value="30.0"/> </CompoundPredicate> </Node> </Node> </Node> </TreeModel> </PMML> Example PMML Mining model (modelChain) <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header> <Application name="Drools-PMML" version="7.0.0-SNAPSHOT" /> </Header> <DataDictionary numberOfFields="7"> <DataField name="age" optype="continuous" dataType="double" /> <DataField name="occupation" optype="categorical" dataType="string"> <Value value="SKYDIVER" /> <Value value="ASTRONAUT" /> <Value value="PROGRAMMER" /> <Value value="TEACHER" /> <Value value="INSTRUCTOR" /> </DataField> <DataField name="residenceState" optype="categorical" dataType="string"> <Value value="AP" /> <Value value="KN" /> <Value value="TN" /> </DataField> <DataField name="validLicense" optype="categorical" dataType="boolean" /> <DataField name="overallScore" optype="continuous" dataType="double" /> <DataField name="grade" optype="categorical" dataType="string"> <Value value="A" /> <Value value="B" /> <Value value="C" /> <Value value="D" /> <Value value="F" /> </DataField> <DataField name="qualificationLevel" optype="categorical" dataType="string"> <Value value="Unqualified" /> <Value value="Barely" /> <Value value="Well" /> <Value value="Over" /> </DataField> </DataDictionary> <MiningModel modelName="SampleModelChainMine" functionName="classification"> <MiningSchema> <MiningField name="age" /> <MiningField name="occupation" /> <MiningField name="residenceState" /> <MiningField name="validLicense" /> <MiningField name="overallScore" /> <MiningField name="qualificationLevel" usageType="target"/> </MiningSchema> <Segmentation multipleModelMethod="modelChain"> <Segment id="1"> <True /> <Scorecard modelName="Sample Score 1" useReasonCodes="true" isScorable="true" functionName="regression" baselineScore="0.0" initialScore="0.345"> <MiningSchema> <MiningField name="age" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="occupation" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="residenceState" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="validLicense" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="overallScore" usageType="predicted" /> </MiningSchema> <Output> <OutputField name="calculatedScore" displayName="Final Score" dataType="double" feature="predictedValue" targetField="overallScore" /> </Output> <Characteristics> <Characteristic name="AgeScore" baselineScore="0.0" reasonCode="ABZ"> <Extension name="cellRef" value="USDBUSD8" /> <Attribute partialScore="10.0"> <Extension name="cellRef" value="USDCUSD10" /> <SimplePredicate field="age" operator="lessOrEqual" value="5" /> </Attribute> <Attribute partialScore="30.0" reasonCode="CX1"> <Extension name="cellRef" value="USDCUSD11" /> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="age" operator="greaterOrEqual" value="5" /> <SimplePredicate field="age" operator="lessThan" value="12" /> </CompoundPredicate> </Attribute> <Attribute partialScore="40.0" reasonCode="CX2"> <Extension name="cellRef" value="USDCUSD12" /> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="age" operator="greaterOrEqual" value="13" /> <SimplePredicate field="age" 
operator="lessThan" value="44" /> </CompoundPredicate> </Attribute> <Attribute partialScore="25.0"> <Extension name="cellRef" value="USDCUSD13" /> <SimplePredicate field="age" operator="greaterOrEqual" value="45" /> </Attribute> </Characteristic> <Characteristic name="OccupationScore" baselineScore="0.0"> <Extension name="cellRef" value="USDBUSD16" /> <Attribute partialScore="-10.0" reasonCode="CX2"> <Extension name="description" value="skydiving is a risky occupation" /> <Extension name="cellRef" value="USDCUSD18" /> <SimpleSetPredicate field="occupation" booleanOperator="isIn"> <Array n="2" type="string">SKYDIVER ASTRONAUT</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore="10.0"> <Extension name="cellRef" value="USDCUSD19" /> <SimpleSetPredicate field="occupation" booleanOperator="isIn"> <Array n="2" type="string">TEACHER INSTRUCTOR</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore="5.0"> <Extension name="cellRef" value="USDCUSD20" /> <SimplePredicate field="occupation" operator="equal" value="PROGRAMMER" /> </Attribute> </Characteristic> <Characteristic name="ResidenceStateScore" baselineScore="0.0" reasonCode="RES"> <Extension name="cellRef" value="USDBUSD22" /> <Attribute partialScore="-10.0"> <Extension name="cellRef" value="USDCUSD24" /> <SimplePredicate field="residenceState" operator="equal" value="AP" /> </Attribute> <Attribute partialScore="10.0"> <Extension name="cellRef" value="USDCUSD25" /> <SimplePredicate field="residenceState" operator="equal" value="KN" /> </Attribute> <Attribute partialScore="5.0"> <Extension name="cellRef" value="USDCUSD26" /> <SimplePredicate field="residenceState" operator="equal" value="TN" /> </Attribute> </Characteristic> <Characteristic name="ValidLicenseScore" baselineScore="0.0"> <Extension name="cellRef" value="USDBUSD28" /> <Attribute partialScore="1.0" reasonCode="LX00"> <Extension name="cellRef" value="USDCUSD30" /> <SimplePredicate field="validLicense" operator="equal" value="true" /> </Attribute> <Attribute partialScore="-1.0" reasonCode="LX00"> <Extension name="cellRef" value="USDCUSD31" /> <SimplePredicate field="validLicense" operator="equal" value="false" /> </Attribute> </Characteristic> </Characteristics> </Scorecard> </Segment> <Segment id="2"> <True /> <TreeModel modelName="SampleTree" functionName="classification" missingValueStrategy="lastPrediction" noTrueChildStrategy="returnLastPrediction"> <MiningSchema> <MiningField name="age" usageType="active" /> <MiningField name="validLicense" usageType="active" /> <MiningField name="calculatedScore" usageType="active" /> <MiningField name="qualificationLevel" usageType="predicted" /> </MiningSchema> <Output> <OutputField name="qualification" displayName="Qualification Level" dataType="string" feature="predictedValue" targetField="qualificationLevel" /> </Output> <Node score="Well" id="1"> <True/> <Node score="Barely" id="2"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="age" operator="greaterOrEqual" value="16" /> <SimplePredicate field="validLicense" operator="equal" value="true" /> </CompoundPredicate> <Node score="Barely" id="3"> <SimplePredicate field="calculatedScore" operator="lessOrEqual" value="50.0" /> </Node> <Node score="Well" id="4"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="calculatedScore" operator="greaterThan" value="50.0" /> <SimplePredicate field="calculatedScore" operator="lessOrEqual" value="60.0" /> </CompoundPredicate> </Node> <Node score="Over" id="5"> <SimplePredicate 
field="calculatedScore" operator="greaterThan" value="60.0" /> </Node> </Node> <Node score="Unqualified" id="6"> <CompoundPredicate booleanOperator="surrogate"> <SimplePredicate field="age" operator="lessThan" value="16" /> <SimplePredicate field="calculatedScore" operator="lessOrEqual" value="40.0" /> <True /> </CompoundPredicate> </Node> </Node> </TreeModel> </Segment> </Segmentation> </MiningModel> </PMML> Example PMML Clustering model <?xml version="1.0" encoding="UTF-8"?> <PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1"> <Header> <Application name="KNIME" version="2.8.0"/> </Header> <DataDictionary numberOfFields="5"> <DataField name="sepal_length" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="4.3" rightMargin="7.9"/> </DataField> <DataField name="sepal_width" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="2.0" rightMargin="4.4"/> </DataField> <DataField name="petal_length" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="1.0" rightMargin="6.9"/> </DataField> <DataField name="petal_width" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="0.1" rightMargin="2.5"/> </DataField> <DataField name="class" optype="categorical" dataType="string"/> </DataDictionary> <ClusteringModel modelName="SingleIrisKMeansClustering" functionName="clustering" modelClass="centerBased" numberOfClusters="4"> <MiningSchema> <MiningField name="sepal_length" invalidValueTreatment="asIs"/> <MiningField name="sepal_width" invalidValueTreatment="asIs"/> <MiningField name="petal_length" invalidValueTreatment="asIs"/> <MiningField name="petal_width" invalidValueTreatment="asIs"/> <MiningField name="class" usageType="predicted"/> </MiningSchema> <ComparisonMeasure kind="distance"> <squaredEuclidean/> </ComparisonMeasure> <ClusteringField field="sepal_length" compareFunction="absDiff"/> <ClusteringField field="sepal_width" compareFunction="absDiff"/> <ClusteringField field="petal_length" compareFunction="absDiff"/> <ClusteringField field="petal_width" compareFunction="absDiff"/> <Cluster name="virginica" size="32"> <Array n="4" type="real">6.9125000000000005 3.099999999999999 5.846874999999999 2.1312499999999996</Array> </Cluster> <Cluster name="versicolor" size="41"> <Array n="4" type="real">6.23658536585366 2.8585365853658535 4.807317073170731 1.6219512195121943</Array> </Cluster> <Cluster name="setosa" size="50"> <Array n="4" type="real">5.005999999999999 3.4180000000000006 1.464 0.2439999999999999</Array> </Cluster> <Cluster name="unknown" size="27"> <Array n="4" type="real">5.529629629629629 2.6222222222222222 3.940740740740741 1.2185185185185188</Array> </Cluster> </ClusteringModel> </PMML>
[ "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header copyright=\"JBoss\"/> <DataDictionary numberOfFields=\"5\"> <DataField dataType=\"double\" name=\"fld1\" optype=\"continuous\"/> <DataField dataType=\"double\" name=\"fld2\" optype=\"continuous\"/> <DataField dataType=\"string\" name=\"fld3\" optype=\"categorical\"> <Value value=\"x\"/> <Value value=\"y\"/> </DataField> <DataField dataType=\"double\" name=\"fld4\" optype=\"continuous\"/> <DataField dataType=\"double\" name=\"fld5\" optype=\"continuous\"/> </DataDictionary> <RegressionModel algorithmName=\"linearRegression\" functionName=\"regression\" modelName=\"LinReg\" normalizationMethod=\"logit\" targetFieldName=\"fld4\"> <MiningSchema> <MiningField name=\"fld1\"/> <MiningField name=\"fld2\"/> <MiningField name=\"fld3\"/> <MiningField name=\"fld4\" usageType=\"predicted\"/> <MiningField name=\"fld5\" usageType=\"target\"/> </MiningSchema> <RegressionTable intercept=\"0.5\"> <NumericPredictor coefficient=\"5\" exponent=\"2\" name=\"fld1\"/> <NumericPredictor coefficient=\"2\" exponent=\"1\" name=\"fld2\"/> <CategoricalPredictor coefficient=\"-3\" name=\"fld3\" value=\"x\"/> <CategoricalPredictor coefficient=\"3\" name=\"fld3\" value=\"y\"/> <PredictorTerm coefficient=\"0.4\"> <FieldRef field=\"fld1\"/> <FieldRef field=\"fld2\"/> </PredictorTerm> </RegressionTable> </RegressionModel> </PMML>", "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header copyright=\"JBoss\"/> <DataDictionary numberOfFields=\"4\"> <DataField name=\"param1\" optype=\"continuous\" dataType=\"double\"/> <DataField name=\"param2\" optype=\"continuous\" dataType=\"double\"/> <DataField name=\"overallScore\" optype=\"continuous\" dataType=\"double\" /> <DataField name=\"finalscore\" optype=\"continuous\" dataType=\"double\" /> </DataDictionary> <Scorecard modelName=\"ScorecardCompoundPredicate\" useReasonCodes=\"true\" isScorable=\"true\" functionName=\"regression\" baselineScore=\"15\" initialScore=\"0.8\" reasonCodeAlgorithm=\"pointsAbove\"> <MiningSchema> <MiningField name=\"param1\" usageType=\"active\" invalidValueTreatment=\"asMissing\"> </MiningField> <MiningField name=\"param2\" usageType=\"active\" invalidValueTreatment=\"asMissing\"> </MiningField> <MiningField name=\"overallScore\" usageType=\"target\"/> <MiningField name=\"finalscore\" usageType=\"predicted\"/> </MiningSchema> <Characteristics> <Characteristic name=\"ch1\" baselineScore=\"50\" reasonCode=\"reasonCh1\"> <Attribute partialScore=\"20\"> <SimplePredicate field=\"param1\" operator=\"lessThan\" value=\"20\"/> </Attribute> <Attribute partialScore=\"100\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"param1\" operator=\"greaterOrEqual\" value=\"20\"/> <SimplePredicate field=\"param2\" operator=\"lessOrEqual\" value=\"25\"/> </CompoundPredicate> </Attribute> <Attribute partialScore=\"200\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"param1\" operator=\"greaterOrEqual\" value=\"20\"/> <SimplePredicate field=\"param2\" operator=\"greaterThan\" value=\"25\"/> </CompoundPredicate> </Attribute> </Characteristic> <Characteristic name=\"ch2\" reasonCode=\"reasonCh2\"> <Attribute partialScore=\"10\"> <CompoundPredicate 
booleanOperator=\"or\"> <SimplePredicate field=\"param2\" operator=\"lessOrEqual\" value=\"-5\"/> <SimplePredicate field=\"param2\" operator=\"greaterOrEqual\" value=\"50\"/> </CompoundPredicate> </Attribute> <Attribute partialScore=\"20\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"param2\" operator=\"greaterThan\" value=\"-5\"/> <SimplePredicate field=\"param2\" operator=\"lessThan\" value=\"50\"/> </CompoundPredicate> </Attribute> </Characteristic> </Characteristics> </Scorecard> </PMML>", "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header copyright=\"JBOSS\"/> <DataDictionary numberOfFields=\"5\"> <DataField dataType=\"double\" name=\"fld1\" optype=\"continuous\"/> <DataField dataType=\"double\" name=\"fld2\" optype=\"continuous\"/> <DataField dataType=\"string\" name=\"fld3\" optype=\"categorical\"> <Value value=\"true\"/> <Value value=\"false\"/> </DataField> <DataField dataType=\"string\" name=\"fld4\" optype=\"categorical\"> <Value value=\"optA\"/> <Value value=\"optB\"/> <Value value=\"optC\"/> </DataField> <DataField dataType=\"string\" name=\"fld5\" optype=\"categorical\"> <Value value=\"tgtX\"/> <Value value=\"tgtY\"/> <Value value=\"tgtZ\"/> </DataField> </DataDictionary> <TreeModel functionName=\"classification\" modelName=\"TreeTest\"> <MiningSchema> <MiningField name=\"fld1\"/> <MiningField name=\"fld2\"/> <MiningField name=\"fld3\"/> <MiningField name=\"fld4\"/> <MiningField name=\"fld5\" usageType=\"predicted\"/> </MiningSchema> <Node score=\"tgtX\"> <True/> <Node score=\"tgtX\"> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optA\"/> <Node score=\"tgtX\"> <CompoundPredicate booleanOperator=\"surrogate\"> <SimplePredicate field=\"fld1\" operator=\"lessThan\" value=\"30.0\"/> <SimplePredicate field=\"fld2\" operator=\"greaterThan\" value=\"20.0\"/> </CompoundPredicate> <Node score=\"tgtX\"> <SimplePredicate field=\"fld2\" operator=\"lessThan\" value=\"40.0\"/> </Node> <Node score=\"tgtZ\"> <SimplePredicate field=\"fld2\" operator=\"greaterOrEqual\" value=\"10.0\"/> </Node> </Node> <Node score=\"tgtZ\"> <CompoundPredicate booleanOperator=\"or\"> <SimplePredicate field=\"fld1\" operator=\"greaterOrEqual\" value=\"60.0\"/> <SimplePredicate field=\"fld1\" operator=\"lessOrEqual\" value=\"70.0\"/> </CompoundPredicate> <Node score=\"tgtZ\"> <SimpleSetPredicate booleanOperator=\"isNotIn\" field=\"fld4\"> <Array type=\"string\">optA optB</Array> </SimpleSetPredicate> </Node> </Node> </Node> <Node score=\"tgtY\"> <CompoundPredicate booleanOperator=\"or\"> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optA\"/> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optC\"/> </CompoundPredicate> <Node score=\"tgtY\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"fld1\" operator=\"greaterThan\" value=\"10.0\"/> <SimplePredicate field=\"fld1\" operator=\"lessThan\" value=\"50.0\"/> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optA\"/> <SimplePredicate field=\"fld2\" operator=\"lessThan\" value=\"100.0\"/> <SimplePredicate field=\"fld3\" operator=\"equal\" value=\"false\"/> </CompoundPredicate> </Node> <Node score=\"tgtZ\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optC\"/> <SimplePredicate field=\"fld2\" operator=\"lessThan\" value=\"30.0\"/> </CompoundPredicate> </Node> 
</Node> </Node> </TreeModel> </PMML>", "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header> <Application name=\"Drools-PMML\" version=\"7.0.0-SNAPSHOT\" /> </Header> <DataDictionary numberOfFields=\"7\"> <DataField name=\"age\" optype=\"continuous\" dataType=\"double\" /> <DataField name=\"occupation\" optype=\"categorical\" dataType=\"string\"> <Value value=\"SKYDIVER\" /> <Value value=\"ASTRONAUT\" /> <Value value=\"PROGRAMMER\" /> <Value value=\"TEACHER\" /> <Value value=\"INSTRUCTOR\" /> </DataField> <DataField name=\"residenceState\" optype=\"categorical\" dataType=\"string\"> <Value value=\"AP\" /> <Value value=\"KN\" /> <Value value=\"TN\" /> </DataField> <DataField name=\"validLicense\" optype=\"categorical\" dataType=\"boolean\" /> <DataField name=\"overallScore\" optype=\"continuous\" dataType=\"double\" /> <DataField name=\"grade\" optype=\"categorical\" dataType=\"string\"> <Value value=\"A\" /> <Value value=\"B\" /> <Value value=\"C\" /> <Value value=\"D\" /> <Value value=\"F\" /> </DataField> <DataField name=\"qualificationLevel\" optype=\"categorical\" dataType=\"string\"> <Value value=\"Unqualified\" /> <Value value=\"Barely\" /> <Value value=\"Well\" /> <Value value=\"Over\" /> </DataField> </DataDictionary> <MiningModel modelName=\"SampleModelChainMine\" functionName=\"classification\"> <MiningSchema> <MiningField name=\"age\" /> <MiningField name=\"occupation\" /> <MiningField name=\"residenceState\" /> <MiningField name=\"validLicense\" /> <MiningField name=\"overallScore\" /> <MiningField name=\"qualificationLevel\" usageType=\"target\"/> </MiningSchema> <Segmentation multipleModelMethod=\"modelChain\"> <Segment id=\"1\"> <True /> <Scorecard modelName=\"Sample Score 1\" useReasonCodes=\"true\" isScorable=\"true\" functionName=\"regression\" baselineScore=\"0.0\" initialScore=\"0.345\"> <MiningSchema> <MiningField name=\"age\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"occupation\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"residenceState\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"validLicense\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"overallScore\" usageType=\"predicted\" /> </MiningSchema> <Output> <OutputField name=\"calculatedScore\" displayName=\"Final Score\" dataType=\"double\" feature=\"predictedValue\" targetField=\"overallScore\" /> </Output> <Characteristics> <Characteristic name=\"AgeScore\" baselineScore=\"0.0\" reasonCode=\"ABZ\"> <Extension name=\"cellRef\" value=\"USDBUSD8\" /> <Attribute partialScore=\"10.0\"> <Extension name=\"cellRef\" value=\"USDCUSD10\" /> <SimplePredicate field=\"age\" operator=\"lessOrEqual\" value=\"5\" /> </Attribute> <Attribute partialScore=\"30.0\" reasonCode=\"CX1\"> <Extension name=\"cellRef\" value=\"USDCUSD11\" /> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"5\" /> <SimplePredicate field=\"age\" operator=\"lessThan\" value=\"12\" /> </CompoundPredicate> </Attribute> <Attribute partialScore=\"40.0\" reasonCode=\"CX2\"> <Extension name=\"cellRef\" value=\"USDCUSD12\" /> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"13\" /> <SimplePredicate field=\"age\" operator=\"lessThan\" 
value=\"44\" /> </CompoundPredicate> </Attribute> <Attribute partialScore=\"25.0\"> <Extension name=\"cellRef\" value=\"USDCUSD13\" /> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"45\" /> </Attribute> </Characteristic> <Characteristic name=\"OccupationScore\" baselineScore=\"0.0\"> <Extension name=\"cellRef\" value=\"USDBUSD16\" /> <Attribute partialScore=\"-10.0\" reasonCode=\"CX2\"> <Extension name=\"description\" value=\"skydiving is a risky occupation\" /> <Extension name=\"cellRef\" value=\"USDCUSD18\" /> <SimpleSetPredicate field=\"occupation\" booleanOperator=\"isIn\"> <Array n=\"2\" type=\"string\">SKYDIVER ASTRONAUT</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore=\"10.0\"> <Extension name=\"cellRef\" value=\"USDCUSD19\" /> <SimpleSetPredicate field=\"occupation\" booleanOperator=\"isIn\"> <Array n=\"2\" type=\"string\">TEACHER INSTRUCTOR</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore=\"5.0\"> <Extension name=\"cellRef\" value=\"USDCUSD20\" /> <SimplePredicate field=\"occupation\" operator=\"equal\" value=\"PROGRAMMER\" /> </Attribute> </Characteristic> <Characteristic name=\"ResidenceStateScore\" baselineScore=\"0.0\" reasonCode=\"RES\"> <Extension name=\"cellRef\" value=\"USDBUSD22\" /> <Attribute partialScore=\"-10.0\"> <Extension name=\"cellRef\" value=\"USDCUSD24\" /> <SimplePredicate field=\"residenceState\" operator=\"equal\" value=\"AP\" /> </Attribute> <Attribute partialScore=\"10.0\"> <Extension name=\"cellRef\" value=\"USDCUSD25\" /> <SimplePredicate field=\"residenceState\" operator=\"equal\" value=\"KN\" /> </Attribute> <Attribute partialScore=\"5.0\"> <Extension name=\"cellRef\" value=\"USDCUSD26\" /> <SimplePredicate field=\"residenceState\" operator=\"equal\" value=\"TN\" /> </Attribute> </Characteristic> <Characteristic name=\"ValidLicenseScore\" baselineScore=\"0.0\"> <Extension name=\"cellRef\" value=\"USDBUSD28\" /> <Attribute partialScore=\"1.0\" reasonCode=\"LX00\"> <Extension name=\"cellRef\" value=\"USDCUSD30\" /> <SimplePredicate field=\"validLicense\" operator=\"equal\" value=\"true\" /> </Attribute> <Attribute partialScore=\"-1.0\" reasonCode=\"LX00\"> <Extension name=\"cellRef\" value=\"USDCUSD31\" /> <SimplePredicate field=\"validLicense\" operator=\"equal\" value=\"false\" /> </Attribute> </Characteristic> </Characteristics> </Scorecard> </Segment> <Segment id=\"2\"> <True /> <TreeModel modelName=\"SampleTree\" functionName=\"classification\" missingValueStrategy=\"lastPrediction\" noTrueChildStrategy=\"returnLastPrediction\"> <MiningSchema> <MiningField name=\"age\" usageType=\"active\" /> <MiningField name=\"validLicense\" usageType=\"active\" /> <MiningField name=\"calculatedScore\" usageType=\"active\" /> <MiningField name=\"qualificationLevel\" usageType=\"predicted\" /> </MiningSchema> <Output> <OutputField name=\"qualification\" displayName=\"Qualification Level\" dataType=\"string\" feature=\"predictedValue\" targetField=\"qualificationLevel\" /> </Output> <Node score=\"Well\" id=\"1\"> <True/> <Node score=\"Barely\" id=\"2\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"16\" /> <SimplePredicate field=\"validLicense\" operator=\"equal\" value=\"true\" /> </CompoundPredicate> <Node score=\"Barely\" id=\"3\"> <SimplePredicate field=\"calculatedScore\" operator=\"lessOrEqual\" value=\"50.0\" /> </Node> <Node score=\"Well\" id=\"4\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"calculatedScore\" 
operator=\"greaterThan\" value=\"50.0\" /> <SimplePredicate field=\"calculatedScore\" operator=\"lessOrEqual\" value=\"60.0\" /> </CompoundPredicate> </Node> <Node score=\"Over\" id=\"5\"> <SimplePredicate field=\"calculatedScore\" operator=\"greaterThan\" value=\"60.0\" /> </Node> </Node> <Node score=\"Unqualified\" id=\"6\"> <CompoundPredicate booleanOperator=\"surrogate\"> <SimplePredicate field=\"age\" operator=\"lessThan\" value=\"16\" /> <SimplePredicate field=\"calculatedScore\" operator=\"lessOrEqual\" value=\"40.0\" /> <True /> </CompoundPredicate> </Node> </Node> </TreeModel> </Segment> </Segmentation> </MiningModel> </PMML>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <PMML version=\"4.1\" xmlns=\"http://www.dmg.org/PMML-4_1\"> <Header> <Application name=\"KNIME\" version=\"2.8.0\"/> </Header> <DataDictionary numberOfFields=\"5\"> <DataField name=\"sepal_length\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"4.3\" rightMargin=\"7.9\"/> </DataField> <DataField name=\"sepal_width\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"2.0\" rightMargin=\"4.4\"/> </DataField> <DataField name=\"petal_length\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"1.0\" rightMargin=\"6.9\"/> </DataField> <DataField name=\"petal_width\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"0.1\" rightMargin=\"2.5\"/> </DataField> <DataField name=\"class\" optype=\"categorical\" dataType=\"string\"/> </DataDictionary> <ClusteringModel modelName=\"SingleIrisKMeansClustering\" functionName=\"clustering\" modelClass=\"centerBased\" numberOfClusters=\"4\"> <MiningSchema> <MiningField name=\"sepal_length\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"sepal_width\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"petal_length\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"petal_width\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"class\" usageType=\"predicted\"/> </MiningSchema> <ComparisonMeasure kind=\"distance\"> <squaredEuclidean/> </ComparisonMeasure> <ClusteringField field=\"sepal_length\" compareFunction=\"absDiff\"/> <ClusteringField field=\"sepal_width\" compareFunction=\"absDiff\"/> <ClusteringField field=\"petal_length\" compareFunction=\"absDiff\"/> <ClusteringField field=\"petal_width\" compareFunction=\"absDiff\"/> <Cluster name=\"virginica\" size=\"32\"> <Array n=\"4\" type=\"real\">6.9125000000000005 3.099999999999999 5.846874999999999 2.1312499999999996</Array> </Cluster> <Cluster name=\"versicolor\" size=\"41\"> <Array n=\"4\" type=\"real\">6.23658536585366 2.8585365853658535 4.807317073170731 1.6219512195121943</Array> </Cluster> <Cluster name=\"setosa\" size=\"50\"> <Array n=\"4\" type=\"real\">5.005999999999999 3.4180000000000006 1.464 0.2439999999999999</Array> </Cluster> <Cluster name=\"unknown\" size=\"27\"> <Array n=\"4\" type=\"real\">5.529629629629629 2.6222222222222222 3.940740740740741 1.2185185185185188</Array> </Cluster> </ClusteringModel> </PMML>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/pmml-examples-ref_pmml-models
Chapter 3. Common concepts
Chapter 3. Common concepts 3.1. Types The API uses the type concept to describe the different kinds of objects accepted and returned. There are three relevant kinds of types: Primitive types Describe simple kinds of objects, like strings or integers . Enumerated types Describe lists of valid values like VmStatus or DiskFormat . Structured types Describe structured objects, with multiple attributes and links, like Vm or Disk . 3.2. Identified types Many of the types used by the API represent identified objects, objects that have a unique identifier and exist independently of other objects. The types used to describe those objects extend the Identified type, which contains the following set of common attributes: Attribute Type Description id String Each object in the virtualization infrastructure contains an id , which acts as a unique identifier. href String The canonical location of the object as an absolute path. name String A user-supplied human readable name for the object. The name is unique across all objects of the same type. description String A free-form user-supplied human readable description of the object. Important Currently for most types of objects the id attribute is actually a randomly generated UUID , but this is an implementation detail, and users should not rely on that, as it may change in the future. Instead users should assume that these identifiers are just strings. 3.3. Objects Objects are the individual instances of the types supported by the API. For example, the virtual machine with identifier 123 is an object of the Vm type. 3.4. Collections A collection is a set of objects of the same type. 3.5. Representations The state of objects needs to be represented when it is transferred between the client and the server. The API supports XML and JSON as the representation of the state of objects, both for input and output. 3.5.1. XML representation The XML representation of an object consists of an XML element corresponding to the type of the object, XML attributes for the id and href attributes, and nested XML elements for the rest of the attributes. For example, the XML representation for a virtual machine appears as follows: <vm id="123" href="/ovirt-engine/api/vms/123"> <name>myvm</name> <description>My VM</description> <memory>1073741824</memory> ... </vm> The XML representation of a collection of objects consists of an XML element, named after the type of the objects, in plural. This contains the representations of the objects of the collection. For example, the XML representation for a collection of virtual machines appears as follows: <vms> <vm id="123" href="/ovirt-engine/api/vms/123"> <name>yourvm</name> <description>Your VM</description> <memory>1073741824</memory> ... </vm> <vm id="456" href="/ovirt-engine/api/vms/456"> <name>myname</name> <description>My description</description> <memory>2147483648</memory> ... </vm> ... </vms> Important In the XML representation of objects the id and href attributes are the only ones that are represented as XML attributes; the rest are represented as nested XML elements. 3.5.2. JSON representation The JSON representation of an object consists of a JSON document containing a name/value pair for each attribute (including id and href ).
For example, the JSON representation of a virtual machine appears as follows: The JSON representation of a collection of objects consists of a JSON document containing a name/value pair (named after the type of the objects, in singular) which in turn contains an array with the representations of the objects of the collection. For example, the JSON representation for a collection of virtual machines appears as follows: 3.6. Services Services are the parts of the server responsible for retrieving, adding, updating, removing and executing actions on the objects supported by the API. There are two relevant kinds of services: Services that manage a collection of objects These services are responsible for listing existing objects and adding new objects. For example, the Vms service is responsible for managing the collection of virtual machines available in the system. Services that manage a specific object These services are responsible for retrieving, updating, deleting and executing actions on specific objects. For example, the Vm service is responsible for managing a specific virtual machine. Each service is accessible via a particular path within the server. For example, the service that manages the collection of virtual machines available in the system is available via the path /vms , and the service that manages the virtual machine 123 is available via the path /vms/123 . All kinds of services have a set of methods that represent the operations that they can perform. The services that manage collections of objects usually have the list and add methods. The services that manage specific objects usually have the get , update and remove methods. In addition, services may also have action methods that represent less common operations. For example, the Vm service has a start method that is used to start a virtual machine. For the more usual methods there is a direct mapping between the name of the method and the name of the HTTP method: Method name HTTP method add POST get GET list GET update PUT remove DELETE The path used in the HTTP request is the path of the service, with the /ovirt-engine/api prefix. For example, the request to list the virtual machines should be like this, using the HTTP GET method and the path /vms : For action methods the HTTP method is always POST , and the name of the method is added as a suffix to the path. For example, the request to start virtual machine 123 should look like this, using the HTTP POST method and the path /vms/123/start : Each method has a set of parameters. Parameters are classified into two categories: Main parameter The main parameter corresponds to the object or collection that is retrieved, added or updated. This only applies to the add , get , list and update methods, and there will be exactly one such main parameter per method. Secondary parameters The rest of the parameters. For example, the operation that adds a virtual machine (see here ) has three parameters: vm , clone and clone_permissions . The main parameter is vm , as it describes the object that is added. The clone and clone_permissions parameters are secondary parameters. The main parameter, when used for input, must be included in the body of the HTTP request. For example, when adding a virtual machine, the vm parameter, of type Vm , must be included in the request body. So the complete request to add a virtual machine, including all the HTTP details, must look like this: When used for output, the main parameters are included in the response body.
For example, when adding a virtual machine, the vm parameter will be included in the response body. So the complete response body will look like this: Secondary parameters are only allowed for input (except for action methods, which are described later), and they must be included as query parameters. For example, when adding a virtual machine with the clone parameter set to true , the complete request must look like this: Action methods only have secondary parameters. They can be used for input and output, and they should be included in the request body, wrapped with an action element. For example, the action method used to start a virtual machine (see here ) has a vm parameter to describe how the virtual machine should be started, and a use_cloud_init parameter to specify if cloud-init should be used to configure the guest operating system. So the complete request to start virtual machine 123 using cloud-init will look like this when using XML: 3.7. Searching The list method of some services has a search parameter that can be used to specify search criteria. When used, the server will only return objects within the collection that satisfy those criteria. For example, the following request will return only the virtual machine named myvm : 3.7.1. Maximum results parameter Use the max parameter to limit the number of objects returned. For example, the following request will only return one virtual machine, regardless of how many are available in the system: A search request without the max parameter will return all the objects. Specifying the max parameter is recommended to reduce the impact of requests on the overall performance of the system. 3.7.2. Case sensitivity By default queries are not case sensitive. For example, the following request will return the virtual machines named myvm , MyVM and MYVM : The optional case_sensitive boolean parameter can be used to change this behaviour. For example, to get exactly the virtual machine named myvm , and not MyVM or MYVM , send a request like this: 3.7.3. Search syntax The search parameters use the same syntax as the Red Hat Virtualization query language: The sortby clause is optional and only needed when ordering results. Example search queries: Collection Criteria Result hosts vms.status=up Returns a list of all hosts running virtual machines that are up . vms domain=example.com Returns a list of all virtual machines running on the specified domain. vms users.name=mary Returns a list of all virtual machines belonging to users with the user name mary . events severity > normal sortby time Returns a list of all events with severity higher than normal and sorted by the value of their time attribute. events severity > normal sortby time desc Returns a list of all events with severity higher than normal and sorted by the value of their time attribute in descending order. The value of the search parameter must be URL-encoded to translate reserved characters, such as operators and spaces. For example, the equal sign should be encoded as %3D : 3.7.4. Wildcards The asterisk can be used as part of a value, to indicate that any string matches, including the empty string. For example, the following request will return all the virtual machines with names beginning with myvm , such as myvm , myvm2 , myvma or myvm-webserver : 3.7.5. Pagination Some Red Hat Virtualization environments contain large collections of objects. Retrieving all of them with one request isn't practical, and hurts performance.
To allow retrieving them page by page the search parameter supports an optional page clause. This, combined with the max parameter, is the basis for paging. For example, to get the first page of virtual machines, with a page size of 10 virtual machines, send a request like this: Note The search parameter is URL-encoded; the actual value of the search parameter, before encoding, is page 1 , so this is actually requesting the first page. Increase the page value to retrieve the next page: The page clause can be used in conjunction with other clauses inside the search parameter. For example, the following request will return the second page of virtual machines, but sorting by name: Important The API is stateless; it is not possible to retain a state between different requests since all requests are independent of each other. As a result, if a status change occurs between your requests, then the page results may be inconsistent. For example, if you request a specific page from a list of virtual machines, and virtual machines are created or removed before you request the next page, then your results may be missing some of them, or contain duplicates. 3.8. Following links The API returns references to related objects as links . For example, when a virtual machine is retrieved it contains links to its disk attachments and network interface cards: <vm id="123" href="/ovirt-engine/api/vms/123"> ... <link rel="diskattachments" href="/ovirt-engine/api/vms/123/diskattachments"/> <link rel="nics" href="/ovirt-engine/api/vms/123/nics"/> ... </vm> The complete description of those linked objects can be retrieved by sending separate requests: However, in some situations it is more convenient for the application using the API to retrieve the linked information in the same request. This is useful, for example, when the additional network round trips introduce an unacceptable overhead, or when the multiple requests complicate the code of the application in an unacceptable way. For those use cases the API provides a follow parameter that allows the application to retrieve the linked information using only one request. The value of the follow parameter is a list of strings, separated by commas. Each of those strings is the path of the linked object. For example, to retrieve the disk attachments and the NICs in the example above the request should be like this: That will return a response like this: <vm id="123" href="/ovirt-engine/api/vms/123"> ... <disk_attachments> <disk_attachment id="456" href="/ovirt-engine/api/vms/123/diskattachments/456"> <active>true</active> <bootable>true</bootable> <interface>virtio_scsi</interface> <pass_discard>false</pass_discard> <read_only>false</read_only> <uses_scsi_reservation>false</uses_scsi_reservation> <disk id="789" href="/ovirt-engine/api/disks/789"/> </disk_attachment> ... </disk_attachments> <nics> <nic id="234" href="/ovirt-engine/api/vms/123/nics/234"> <name>eth0</name> <interface>virtio</interface> <linked>true</linked> <mac> <address>00:1a:4a:16:01:00</address> </mac> <plugged>true</plugged> </nic> ... </nics> ... </vm> The path to the linked object can be a single word, as in the example, or it can be a sequence of words, separated by dots, to request nested data. For example, the example used disk_attachments in order to retrieve the complete description of the disk attachments, but each disk attachment contains a link to the disk, which wasn't followed .
In order to also follow the links to the disks, the following request can be used: That will result in the following response: <vm id="123" href="/ovirt-engine/api/vms/123"> <disk_attachments> <disk_attachment id="456" href="/ovirt-engine/api/vms/123/diskattachments/456"> <active>true</active> <bootable>true</bootable> <interface>virtio_scsi</interface> <pass_discard>false</pass_discard> <read_only>false</read_only> <uses_scsi_reservation>false</uses_scsi_reservation> <disk id="789" href="/ovirt-engine/api/disks/789"> <name>mydisk</name> <description>My disk</description> <actual_size>0</actual_size> <format>raw</format> <sparse>true</sparse> <status>ok</status> <storage_type>image</storage_type> <total_size>0</total_size> ... </disk> </disk_attachment> ... </disk_attachments> ... </vm> The path can be made as deep as needed. For example, to also get the statistics of the disks: Multiple path elements and multiple paths can be combined. For example, to get the disk attachments and the network interface cards, both with their statistics: Important Almost all the operations that retrieve objects support the follow parameter, but make sure to explicitly check the reference documentation, as some operations may not support it, or may provide advice on how to use it to get the best performance. Important Using the follow parameter moves the overhead from the client side to the server side. When you request additional data, the server must fetch and merge it with the basic data. That consumes CPU and memory on the server side, and will in most cases require additional database queries. That may adversely affect the performance of the server, especially in large scale environments. Make sure to test your application in a realistic environment, and use the follow parameter only when justified. 3.9. Permissions Many of the services that manage a single object provide a reference to a permissions service that manages the permissions assigned to that object. Each permission contains links to the user or group, the role and the object. For example, the permissions assigned to a specific virtual machine can be retrieved by sending a request like this: The response body will look like this: <permissions> <permission id="456" href="/ovirt-engine/api/vms/123/permissions/456"> <user id="789" href="/ovirt-engine/api/users/789"/> <role id="abc" href="/ovirt-engine/api/roles/abc"/> <vm id="123" href="/ovirt-engine/api/vms/123"/> </permission> ... </permissions> A permission is added to an object by sending a POST request with a permission representation to this service. Each new permission requires a role and a user. 3.10. Handling errors Some errors require further explanation beyond a standard HTTP status code. For example, the API reports an unsuccessful object state update or action with a fault in the response body. The fault contains the reason and detail attributes. For example, when the server receives a request to create a virtual machine without the mandatory name attribute, it will respond with the following HTTP response line: And the following response body: <fault> <reason>Incomplete parameters</reason> <detail>Vm [name] required for add</detail> </fault>
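For reference, the kind of request that triggers this fault is an add operation whose vm representation omits the mandatory name element. The following is only an illustrative sketch; it reuses the placeholder host name and bearer token from the earlier add example rather than values from a real environment:

POST /ovirt-engine/api/vms HTTP/1.1
Host: myengine.example.com
Authorization: Bearer fqbR1ftzh8wBCviLxJcYuV5oSDI=
Content-Type: application/xml
Accept: application/xml

<vm>
  <description>My VM</description>
  <cluster>
    <name>Default</name>
  </cluster>
  <template>
    <name>Blank</name>
  </template>
</vm>

Because the vm element carries no name, the server answers with the 400 Bad Request status line and the fault body shown above.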
[ "<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <name>myvm</name> <description>My VM</description> <memory>1073741824</memory> </vm>", "<vms> <vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <name>yourvm</name> <description>Your VM</description> <memory>1073741824</memory> </vm> <vm id=\"456\" href=\"/ovirt-engine/api/vms/456\"> <name>myname</name> <description>My description</description> <memory>2147483648</memory> </vm> </vms>", "{ \"id\": \"123\", \"href\": \"/ovirt-engine/api/vms/123\", \"name\": \"myvm\", \"description\": \"My VM\", \"memory\": 1073741824, }", "{ \"vm\": [ { \"id\": \"123\", \"href\": \"/ovirt-engine/api/vms/123\", \"name\": \"myvm\", \"description\": \"My VM\", \"memory\": 1073741824, }, { \"id\": \"456\", \"href\": \"/ovirt-engine/api/vms/456\", \"name\": \"yourvm\", \"description\": \"Your VM\", \"memory\": 2147483648, }, ] }", "GET /ovirt-engine/api/vms", "POST /ovirt-engine/api/vms/123/start", "POST /ovirt-engine/api/vms HTTP/1.1 Host: myengine.example.com Authorization: Bearer fqbR1ftzh8wBCviLxJcYuV5oSDI= Content-Type: application/xml Accept: application/xml <vm> <name>myvm</name> <description>My VM</description> <cluster> <name>Default</name> </cluster> <template> <name>Blank</name> </template> </vm>", "HTTP/1.1 201 Created Content-Type: application/xml <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"> <name>myvm</name> <description>My VM</description> </vm>", "POST /ovirt-engine/api/vms?clone=true HTTP/1.1 Host: myengine.example.com Authorization: Bearer fqbR1ftzh8wBCviLxJcYuV5oSDI= Content-Type: application/xml Accept: application/xml <vm> <name>myvm</name> <description>My VM</description> <cluster> <name>Default</name> </cluster> <template> <name>Blank</name> </template> </vm>", "POST /ovirt-engine/api/vms/123/start HTTP/1.1 Host: myengine.example.com Authorization: Bearer fqbR1ftzh8wBCviLxJcYuV5oSDI= Content-Type: application/xml Accept: application/xml <action> <use_cloud_init>true</use_cloud_init> <vm> <initialization> <nic_configurations> <nic_configuration> <name>eth0</name> <on_boot>true</on_boot> <boot_protocol>static</boot_protocol> <ip> <address>192.168.0.100</address> <netmask>255.255.255.0</netmask> <gateway>192.168.0.1</netmask> </ip> </nic_configuration> </nic_configurations> <dns_servers>192.168.0.1</dns_servers> </initialization> </vm> </action>", "GET /ovirt-engine/api/vms?search=name%3Dmyvm", "GET /ovirt-engine/api/vms?max=1", "GET /ovirt-engine/api/vms?search=name%3Dmyvm", "GET /ovirt-engine/api/vms?search=name%3D=myvm&case_sensitive=true", "(criteria) [sortby (element) asc|desc]", "GET /ovirt-engine/api/vms?search=name%3Dmyvm", "GET /ovirt-engine/api/vms?search=name%3Dmyvm*", "GET /ovirt-engine/api/vms?search=page%201&max=10", "GET /ovirt-engine/api/vms?search=page%202&max=10", "GET /ovirt-engine/api/vms?search=sortby%20name%20page%202&max=10", "<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <link rel=\"diskattachments\" href=\"/ovirt-engine/api/vms/123/diskattachments\"/> <link rel=\"nics\" href=\"/ovirt-engine/api/vms/123/nics\"/> </vm>", "GET /ovirt-engine/api/vms/123/diskattachments GET /ovirt-engine/api/vms/123/nics", "GET /ovirt-engine/api/vms/123?follow=disk_attachments,nics", "<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <disk_attachments> <disk_attachment id=\"456\" href=\"/ovirt-engine/api/vms/123/diskattachments/456\"> <active>true</active> <bootable>true</bootable> <interface>virtio_scsi</interface> <pass_discard>false</pass_discard> <read_only>false</read_only> 
<uses_scsi_reservation>false</uses_scsi_reservation> <disk id=\"789\" href=\"/ovirt-engine/api/disks/789\"/> </disk_attachment> </disk_attacments> <nics> <nic id=\"234\" href=\"/ovirt-engine/api/vms/123/nics/234\"> <name>eth0</name> <interface>virtio</interface> <linked>true</linked> <mac> <address>00:1a:4a:16:01:00</address> </mac> <plugged>true</plugged> </nic> </nics> </vm>", "POST /ovirt-engine/api/vms/123?follow=disk_attachments.disk", "<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <disk_attachments> <disk_attachment id=\"456\" href=\"/ovirt-engine/api/vms/123/diskattachments/456\"> <active>true</active> <bootable>true</bootable> <interface>virtio_scsi</interface> <pass_discard>false</pass_discard> <read_only>false</read_only> <uses_scsi_reservation>false</uses_scsi_reservation> <disk id=\"789\" href=\"/ovirt-engine/api/disks/789\"> <name>mydisk</name> <description>My disk</description> <actual_size>0</actual_size> <format>raw</format> <sparse>true</sparse> <status>ok</status> <storage_type>image</storage_type> <total_size>0</total_size> </disk> </disk_attachment> </disk_attachments> </vm>", "POST /ovirt-engine/api/vms/123?follow=disk_attachments.disk.statistics", "POST /ovirt-engine/api/vms/123?follow=disk_attachments.disk.statistics,nics.statistics", "GET /ovirt-engine/api/vms/123/permissions", "<permissions> <permission id=\"456\" href=\"/ovirt-engien/api/vms/123/permissions/456\"> <user id=\"789\" href=\"/ovirt-engine/api/users/789\"/> <role id=\"abc\" href=\"/ovirt-engine/api/roles/abc\"/> <vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"/> </permission> </permissions>", "HTTP/1.1 400 Bad Request", "<fault> <reason>Incomplete parameters</reason> <detail>Vm [name] required for add</detail> </fault>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/rest_api_guide/documents-003_common_concepts
Operator APIs
Operator APIs OpenShift Container Platform 4.17 Reference guide for Operator APIs Red Hat OpenShift Documentation Team
[ "More specifically, given an OperatorPKI with <name>, the CNO will manage:", "More specifically, given an OperatorPKI with <name>, the CNO will manage:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/operator_apis/index
Appendix A. Reference material
Appendix A. Reference material A.1. About MTR command-line arguments The following is a detailed description of the available MTR command line arguments. Note To run the MTR command without prompting, for example when executing from a script, you must use the following arguments: --batchMode --overwrite --input --target Table A.1. MTR CLI arguments Argument Description --additionalClassPath A space-delimited list of additional JAR files or directories to add to the class path so that they are available for decompilation or other analysis. --addonDir Add the specified directory as a custom add-on repository. --analyzeKnownLibraries Flag to analyze known software artifacts embedded within your application. By default, MTR only analyzes application code. Note This option may result in a longer execution time and a large number of migration issues being reported. --batchMode Flag to specify that MTR should be run in a non-interactive mode without prompting for confirmation. This mode takes the default values for any parameters not passed in to the command line. --debug Flag to run MTR in debug mode. --disableTattletale Flag to disable generation of the Tattletale report. If both enableTattletale and disableTattletale are set to true, then disableTattletale will be ignored and the Tattletale report will still be generated. --discoverPackages Flag to list all available packages in the input binary application. --enableClassNotFoundAnalysis Flag to enable analysis of Java files that are not available on the class path. This should not be used if some classes will be unavailable at analysis time. --enableCompatibleFilesReport Flag to enable generation of the Compatible Files report. Due to processing all files without found issues, this report may take a long time for large applications. --enableTattletale Flag to enable generation of a Tattletale report for each application. This option is enabled by default when eap is in the included target. If both enableTattletale and disableTattletale are set to true, then disableTattletale will be ignored and the Tattletale report will still be generated. --enableTransactionAnalysis [Technology Preview] Flag to enable generation of a Transactions report that displays the call stack, which executes operations on relational database tables. The Enable Transaction Analysis feature supports Spring Data JPA and the traditional preparedStatement() method for SQL statement execution. It does not support ORM frameworks, such as Hibernate. Note enableTransactionAnalysis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. --excludePackages A space-delimited list of packages to exclude from evaluation. For example, entering com.mycompany.commonutilities excludes all classes whose package names begin with com.mycompany.commonutilities . --excludeTags A space-delimited list of tags to exclude. When specified, rules with these tags will not be processed. To see the full list of tags, use the --listTags argument. --explodedApp Flag to indicate that the provided input directory contains source files for a single application. --exportCSV Flag to export the report data to a CSV file on your local file system. 
MTR creates the file in the directory specified by the --output argument. The CSV file can be imported into a spreadsheet program for data manipulation and analysis. --exportSummary Flag to instruct MTR to generate an analysisSummary.json export file in the output directory. The file contains analysis summary information for each application analyzed, including the number of incidents and story points by category, and all of the technology tags associated with the analyzed applications. --exportZipReport This argument creates a reports.zip file in the output folder. The file contains the analysis output, typically, the reports. If requested, it can also contain the CSV export files. --help Display the MTR help message. --immutableAddonDir Add the specified directory as a custom read-only add-on repository. --includeTags A space-delimited list of tags to use. When specified, only rules with these tags will be processed. To see the full list of tags, use the --listTags argument. --input A space-delimited list of the path to the file or directory containing one or more applications to be analyzed. This argument is required. --install Specify add-ons to install. The syntax is <GROUP_ID>:<ARTIFACT_ID>[:<VERSION>] . For example, --install core-addon-x or --install org.example.addon:example:1.0.0 . --keepWorkDirs Flag to instruct MTR to not delete temporary working files, such as the graph database and extracted archive files. This is useful for debugging purposes. --legacyReports Flag to instruct MTR to generate the old format reports instead of the new format reports. --list Flag to list installed add-ons. --listSourceTechnologies Flag to list all available source technologies. --listTags Flag to list all available tags. --listTargetTechnologies Flag to list all available target technologies. --mavenize Flag to create a Maven project directory structure based on the structure and content of the application. This creates pom.xml files using the appropriate Java EE API and the correct dependencies between project modules. See also the --mavenizeGroupId option. --mavenizeGroupId When used with the --mavenize option, all generated pom.xml files will use the provided value for their <groupId> . If this argument is omitted, MTR will attempt to determine an appropriate <groupId> based on the application, or will default to com.mycompany.mavenized . --online Flag to allow network access for features that require it. Currently only validating XML schemas against external resources relies on Internet access. Note that this comes with a performance penalty. --output Specify the path to the directory to output the report information generated by MTR. --overwrite Flag to force delete the existing output directory specified by --output . If you do not specify this argument and the --output directory exists, you are prompted to choose whether to overwrite the contents. Important Do not overwrite a report output directory that contains important information. --packages A space-delimited list of the packages to be evaluated by MTR. It is highly recommended to use this argument. --remove Remove the specified add-ons. The syntax is <GROUP_ID>:<ARTIFACT_ID>[:<VERSION>] . For example, --remove core-addon-x or --remove org.example.addon:example:1.0.0 . --skipReports Flag to indicate that HTML reports should not be generated. A common use of this argument is when exporting report data to a CSV file using --exportCSV . 
--source A space-delimited list of one or more source technologies, servers, platforms, or frameworks to migrate from. This argument, in conjunction with the --target argument, helps to determine which rulesets are used. Use the --listSourceTechnologies argument to list all available sources. --sourceMode Flag to indicate that the application to be evaluated contains source files rather than compiled binaries. The sourceMode argument has been deprecated. It is no longer necessary to specify it. MTR can intuitively process any inputs that are presented to it. In addition, project source folders can be analyzed with binary inputs within the same analysis execution. --target A space-delimited list of one or more target technologies, servers, platforms, or frameworks to migrate to. This argument, in conjunction with the --source argument, helps to determine which rulesets are used. Use the --listTargetTechnologies argument to list all available targets. --userIgnorePath Specify a location, in addition to ${user.home}/.mtr/ignore/ , for MTR to identify files that should be ignored. --userLabelsDirectory Specify a location for MTR to look for custom Target Runtime Labels. The value can be a directory containing label files or a single label file. The Target Runtime Label files must use the .windup.label.xml suffix. The shipped Target Runtime Labels are defined within $MTR_HOME/rules/migration-core/core.windup.label.xml . --userRulesDirectory Specify a location, in addition to <MTR_HOME>/rules/ and ${user.home}/.mtr/rules/ , for MTR to look for custom MTR rules. The value can be a directory containing ruleset files or a single ruleset file. The ruleset files must use the .windup.xml suffix. --version Display the MTR version. --skipSourceCodeReports This option skips generating a Source code report when generating the application analysis report. Use this advanced option when concerned about source code information appearing in the application analysis. A.1.1. Specifying the input A space-delimited list of the path to the file or directory containing one or more applications to be analyzed. This argument is required. Usage Depending on whether the input provided to the --input argument is a file or a directory, it will be evaluated as follows, according to the additional arguments provided. Directory --explodedApp --sourceMode Neither Argument The directory is evaluated as a single application. The directory is evaluated as a single application. Each subdirectory is evaluated as an application. File --explodedApp --sourceMode Neither Argument Argument is ignored; the file is evaluated as a single application. The file is evaluated as a compressed project. The file is evaluated as a single application. A.1.2. Specifying the output directory Specify the path to the directory to output the report information generated by MTR. Usage If omitted, the report will be generated in an <INPUT_ARCHIVE_OR_DIRECTORY>.report directory. If the output directory exists, you will be prompted with the following (with a default of N). However, if you specify the --overwrite argument, MTR will proceed to delete and recreate the directory. See the description of this argument for more information. A.1.3. Setting the source technology A space-delimited list of one or more source technologies, servers, platforms, or frameworks to migrate from. This argument, in conjunction with the --target argument, helps to determine which rulesets are used.
Use the --listSourceTechnologies argument to list all available sources. Usage The --source argument now provides version support, which follows the Maven version range syntax . This instructs MTR to only run the rulesets matching the specified versions. For example, --source eap:5 . Warning When migrating to JBoss EAP, be sure to specify the version, for example, eap:6 . Specifying only eap will run rulesets for all versions of JBoss EAP, including those not relevant to your migration path. See Supported migration paths in Introduction to the Migration Toolkit for Runtimes for the appropriate JBoss EAP version. A.1.4. Setting the target technology A space-delimited list of one or more target technologies, servers, platforms, or frameworks to migrate to. This argument, in conjunction with the --source argument, helps to determine which rulesets are used. If you do not specify this option, you are prompted to select a target. Use the --listTargetTechnologies argument to list all available targets. Usage The --target argument now provides version support, which follows the Maven version range syntax . This instructs MTR to only run the rulesets matching the specified versions. For example, --target eap:7 . Warning When migrating to JBoss EAP, be sure to specify the version in the target, for example, eap:6 . Specifying only eap will run rulesets for all versions of JBoss EAP, including those not relevant to your migration path. See Supported migration paths in Introduction to the Migration Toolkit for Runtimes for the appropriate JBoss EAP version. A.1.5. Selecting packages A space-delimited list of the packages to be evaluated by MTR. It is highly recommended to use this argument. Usage In most cases, you are interested only in evaluating custom application class packages and not standard Java EE or third party packages. The <PACKAGE_N> argument is a package prefix; all subpackages will be scanned. For example, to scan the packages com.mycustomapp and com.myotherapp , use --packages com.mycustomapp com.myotherapp argument on the command line. While you can provide package names for standard Java EE third party software like org.apache , it is usually best not to include them as they should not impact the migration effort. Warning If you omit the --packages argument, every package in the application is scanned, which can impact performance. A.2. 
Supported technology tags The following technology tags are supported in MTR 1.2.7: 0MQ Client 3scale Acegi Security AcrIS Security ActiveMQ library Airframe Airlift Log Manager AKKA JTA Akka Testkit Amazon SQS Client AMQP Client Anakia AngularFaces ANTLR StringTemplate AOP Alliance Apache Accumulo Client Apache Aries Apache Commons JCS Apache Commons Validator Apache Flume Apache Geronimo Apache Hadoop Apache HBase Client Apache Ignite Apache Karaf Apache Mahout Apache Meecrowave JTA Apache Sirona JTA Apache Synapse Apache Tapestry Apiman Applet Arquillian AspectJ Atomikos JTA Avalon Logkit Axion Driver Axis Axis2 BabbageFaces Bean Validation BeanInject Blaze Blitz4j BootsFaces Bouncy Castle ButterFaces Cache API Cactus Camel Camel Messaging Client Camunda Cassandra Client CDI Cfg Engine Chunk Templates Cloudera Coherence Common Annotations Composite Logging Composite Logging JCL Concordion CSS Cucumber Dagger DbUnit Demoiselle JTA Derby Driver Drools DVSL Dynacache EAR Deployment Easy Rules EasyMock Eclipse RCP EclipseLink Ehcache EJB EJB XML Elasticsearch Entity Bean EtlUnit Eureka Everit JTA Evo JTA Feign File system Logging FormLayoutMaker FreeMarker Geronimo JTA GFC Logging GIN GlassFish JTA Google Guice Grails Grapht DI Guava Testing GWT H2 Driver Hamcrest Handlebars HavaRunner Hazelcast Hdiv Hibernate Hibernate Cfg Hibernate Mapping Hibernate OGM HighFaces HornetQ Client HSQLDB Driver HTTP Client HttpUnit ICEfaces Ickenham Ignite JTA Ikasan iLog Infinispan Injekt for Kotlin Iroh Istio Jamon Jasypt Java EE Batch Java EE Batch API Java EE JACC Java EE JAXB Java EE JAXR Java EE JSON-P Java Transaction API JavaFX JavaScript Javax Inject JAX-RS JAX-WS JayWire JBehave JBoss Cache JBoss EJB XML JBoss logging JBoss Transactions JBoss Web XML JBossMQ Client JBPM JCA Jcabi Log JCache JCunit JDBC JDBC datasources JDBC XA datasources Jersey Jetbrick Template Jetty JFreeChart JFunk JGoodies JMock JMockit JMS Connection Factory JMS Queue JMS Topic JMustache JNA JNI JNLP JPA entities JPA Matchers JPA named queries JPA XML JSecurity JSF JSF Page JSilver JSON-B JSP Page JSTL JTA Jukito JUnit Ka DI Keyczar Kibana KLogger Kodein Kotlin Logging KouInject KumuluzEE JTA LevelDB Client Liferay LiferayFaces Lift JTA Log.io Log4J Log4s Logback Logging Utils Logstash Lumberjack Macros Magicgrouplayout Mail Management EJB MapR MckoiSQLDB Driver Memcached Message (MDB) Micro DI Micrometer Microsoft SQL Driver MiGLayout MinLog Mixer Mockito MongoDB Client Monolog Morphia MRules Mule Mule Functional Test Framework MultithreadedTC Mycontainer JTA MyFaces MySQL Driver Narayana Arjuna Needle Neo4j NLOG4J Nuxeo JTA/JCA OACC OAUTH OCPsoft Logging Utils OmniFaces OpenFaces OpenPojo OpenSAML OpenWS OPS4J Pax Logging Service Oracle ADF Oracle DB Driver Oracle Forms Orion EJB XML Orion Web XML Oscache OTR4J OW2 JTA OW2 Log Util OWASP CSRF Guard OWASP ESAPI Peaberry Pega Persistence units Petals EIP PicketBox PicketLink PicoContainer Play Play Test Plexus Container Polyforms DI Portlet PostgreSQL Driver PowerMock PrimeFaces Properties Qpid Client RabbitMQ Client RandomizedTesting Runner Resource Adapter REST Assured Restito RichFaces RMI RocketMQ Client Rythm Template Engine SAML Santuario Scalate Scaldi Scribe Seam Security Realm ServiceMix Servlet ShiftOne Shiro Silk DI SLF4J Snippetory Template Engine SNMP4J Socket handler logging Spark Specsy Spock Spring Spring Batch Spring Boot Spring Boot Actuator Spring Boot Cache Spring Boot Flo Spring Cloud Config Spring Cloud Function Spring Data Spring Data JPA spring DI 
Spring Integration Spring JMX Spring Messaging Client Spring MVC Spring Properties Spring Scheduled Spring Security Spring Shell Spring Test Spring Transactions Spring Web SQLite Driver SSL Standard Widget Toolkit (SWT) Stateful (SFSB) Stateless (SLSB) Sticky Configured Stripes Struts SubCut Swagger SwarmCache Swing SwitchYard Syringe Talend ESB Teiid TensorFlow Test Interface TestNG Thymeleaf TieFaces tinylog Tomcat Tornado Inject Trimou Trunk JGuard Twirl Twitter Util Logging UberFire Unirest Unitils Vaadin Velocity Vlad Water Template Engine Web Services Metadata Web Session Web XML File WebLogic Web XML Webmacro WebSocket WebSphere EJB WebSphere EJB Ext WebSphere Web XML WebSphere WS Binding WebSphere WS Extension Weka Weld WF Core JTA Wicket Winter WSDL WSO2 WSS4J XACML XFire XMLUnit Zbus Client Zipkin A.3. About rule story points A.3.1. What are story points? Story points are an abstract metric commonly used in Agile software development to estimate the level of effort needed to implement a feature or change. The Migration Toolkit for Runtimes uses story points to express the level of effort needed to migrate particular application constructs, and the application as a whole. It does not necessarily translate to man-hours, but the value should be consistent across tasks. A.3.2. How story points are estimated in rules Estimating the level of effort for the story points for a rule can be tricky. The following are the general guidelines MTR uses when estimating the level of effort required for a rule. Level of Effort Story Points Description Information 0 An informational warning with very low or no priority for migration. Trivial 1 The migration is a trivial change or a simple library swap with no or minimal API changes. Complex 3 The changes required for the migration task are complex, but have a documented solution. Redesign 5 The migration task requires a redesign or a complete library change, with significant API changes. Rearchitecture 7 The migration requires a complete rearchitecture of the component or subsystem. Unknown 13 The migration solution is not known and may need a complete rewrite. A.3.3. Task category In addition to the level of effort, you can categorize migration tasks to indicate the severity of the task. The following categories are used to group issues to help prioritize the migration effort. Mandatory The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform. Optional If the migration task is not completed, the application should work, but the results may not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed. Potential The task should be examined during the migration process, but there is not enough detailed information to determine if the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type. Information The task is included to inform you of the existence of certain files. These may need to be examined or modified as part of the modernization effort, but changes are typically not required. For more information on categorizing tasks, see Using custom rule categories . A.4. Additional Resources A.4.1. 
Contributing to the project To help the Migration Toolkit for Runtimes cover most application constructs and server configurations, including yours, you can help with any of the following items: Send an email to [email protected] and let us know what MTR migration rules must cover. Provide example applications to test migration rules. Identify application components and problem areas that might be difficult to migrate: Write a short description of the problem migration areas. Write a brief overview describing how to solve the problem in migration areas. Try Migration Toolkit for Runtimes on your application. Make sure to report any issues you meet. Contribute to the Migration Toolkit for Runtimes rules repository: Write a Migration Toolkit for Runtimes rule to identify or automate a migration process. Create a test for the new rule. For more information, see Rule Development Guide . Contribute to the project source code: Create a core rule. Improve MTR performance or efficiency. Any level of involvement is greatly appreciated! A.4.2. Migration Toolkit for Runtimes development resources Use the following resources to learn and contribute to the Migration Toolkit for Runtimes development: MTR forums: https://developer.jboss.org/en/windup Jira issue tracker: https://issues.redhat.com/projects/WINDUP MTR mailing list: [email protected] A.4.3. Reporting issues MTR uses Jira as its issue tracking system. If you encounter an issue executing MTR, submit a Jira issue . Revised on 2024-09-12 13:44:58 UTC
[ "--input <INPUT_ARCHIVE_OR_DIRECTORY> [...]", "--output <OUTPUT_REPORT_DIRECTORY>", "Overwrite all contents of \"/home/username/<OUTPUT_REPORT_DIRECTORY>\" (anything already in the directory will be deleted)? [y,N]", "--source <SOURCE_1> <SOURCE_2>", "--target <TARGET_1> <TARGET_2>", "--packages <PACKAGE_1> <PACKAGE_2> <PACKAGE_N>" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/cli_guide/reference_material
Getting Started with Camel Spring Boot
Red Hat build of Apache Camel for Spring Boot 3.20
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/getting_started_with_camel_spring_boot/index
Chapter 13. Installing on RHV
Chapter 13. Installing on RHV 13.1. Installing a cluster quickly on RHV You can quickly install a default, non-customized, OpenShift Container Platform cluster on a Red Hat Virtualization (RHV) cluster, similar to the one shown in the following diagram. The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster. To install a default cluster, you prepare the environment, run the installation program and answer its prompts. Then, the installation program creates the OpenShift Container Platform cluster. For an alternative to installing a default cluster, see Installing a cluster with customizations . Note This installation program is available for Linux and macOS only. 13.1.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on Red Hat Virtualization (RHV) . If you use a firewall, configure it to allow the sites that your cluster requires access to. 13.1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 13.1.3. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.7 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation or process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. 
The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. 13.1.4. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.7. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. 
Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 13.1.5. Preparing the network environment on RHV Configure two static IP addresses for the OpenShift Container Platform cluster and create DNS entries using these addresses. Procedure Reserve two static IP addresses On the network where you plan to install OpenShift Container Platform, identify two static IP addresses that are outside the DHCP lease pool. Connect to a host on this network and verify that each of the IP addresses is not in use. For example, use Address Resolution Protocol (ARP) to check that none of the IP addresses have entries: USD arp 10.35.1.19 Example output 10.35.1.19 (10.35.1.19) -- no entry Reserve two static IP addresses following the standard practices for your network environment. Record these IP addresses for future reference. Create DNS entries for the OpenShift Container Platform REST API and apps domain names using this format: api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2 1 For <cluster-name> , <base-domain> , and <ip-address> , specify the cluster name, base domain, and static IP address of your OpenShift Container Platform API. 2 Specify the cluster name, base domain, and static IP address of your OpenShift Container Platform apps for Ingress and the load balancer. 
For example: api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20 13.1.6. Installing OpenShift Container Platform on RHV in insecure mode By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually. Although it is not recommended, you can override this functionality and install OpenShift Container Platform without verifying a certificate by installing OpenShift Container Platform on RHV in insecure mode. Warning Installing in insecure mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network. Procedure Create a file named ~/.ovirt/ovirt-config.yaml . Add the following content to ovirt-config.yaml : ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: "" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true 1 Specify the hostname or address of your oVirt engine. 2 Specify the fully qualified domain name of your oVirt engine. 3 Specify the admin password for your oVirt engine. Run the installer. 13.1.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 13.1.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. 
Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 13.1.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Open the ovirt-imageio port to the Manager from the machine running the installer. By default, the port is 54322 . Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Respond to the installation program prompts. Optional: For SSH Public Key , select a password-less public key, such as ~/.ssh/id_rsa.pub . This key authenticates connections with the new OpenShift Container Platform cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your ssh-agent process uses. For Platform , select ovirt . For Engine FQDN[:PORT] , enter the fully qualified domain name (FQDN) of the RHV environment. For example: rhv-env.virtlab.example.com:443 The installer automatically generates a CA certificate. 
For Would you like to use the above certificate to connect to the Manager? , answer y or N . If you answer N , you must install OpenShift Container Platform in insecure mode. For Engine username , enter the user name and profile of the RHV administrator using this format: <username>@<profile> 1 1 For <username> , specify the user name of an RHV administrator. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For example: admin@internal . For Engine password , enter the RHV admin password. For Cluster , select the RHV cluster for installing OpenShift Container Platform. For Storage domain , select the storage domain for installing OpenShift Container Platform. For Network , select a virtual network that has access to the RHV Manager REST API. For Internal API Virtual IP , enter the static IP address you set aside for the cluster's REST API. For Ingress virtual IP , enter the static IP address you reserved for the wildcard apps domain. For Base Domain , enter the base domain of the OpenShift Container Platform cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: virtlab.example.com For Cluster Name , enter the name of the cluster. For example, my-cluster . Use cluster name from the externally registered/resolvable DNS entries you created for the OpenShift Container Platform REST API and apps domain names. The installation program also gives this name to the cluster in the RHV environment. For Pull Secret , copy the pull secret from the pull-secret.txt file you downloaded earlier and paste it here. You can also get a copy of the same pull secret from the Red Hat OpenShift Cluster Manager . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Important You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation. 13.1.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 13.1.10.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 13.1.10.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 13.1.10.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> To learn more, see Getting started with the OpenShift CLI . 13.1.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 13.1.12. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A Troubleshooting If the installation fails, the installation program times out and displays an error message. To learn more, see Troubleshooting installation issues . 13.1.13. Accessing the OpenShift Container Platform web console on RHV After the OpenShift Container Platform cluster initializes, you can log in to the OpenShift Container Platform web console. Procedure Optional: In the Red Hat Virtualization (RHV) Administration Portal, open Compute Cluster . Verify that the installation program creates the virtual machines. Return to the command line where the installation program is running. When the installation program finishes, it displays the user name and temporary password for logging into the OpenShift Container Platform web console. In a browser, open the URL of the OpenShift Container Platform web console. The URL uses this format: 1 For <clustername>.<basedomain> , specify the cluster name and base domain. For example: 13.1.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 13.1.15. Troubleshooting common issues with installing on Red Hat Virtualization (RHV) Here are some common issues you might encounter, along with proposed causes and solutions. 13.1.15.1. CPU load increases and nodes go into a Not Ready state Symptom : CPU load increases significantly and nodes start going into a Not Ready state. Cause : The storage domain latency might be too high, especially for control plane nodes (also known as the master nodes). 
Solution : Make the nodes ready again by restarting the kubelet service: USD systemctl restart kubelet Inspect the OpenShift Container Platform metrics service, which automatically gathers and reports on some valuable data such as the etcd disk sync duration. If the cluster is operational, use this data to help determine whether storage latency or throughput is the root issue. If so, consider using a storage resource that has lower latency and higher throughput. To get raw metrics, enter the following command as kubeadmin or user with cluster-admin privileges: USD oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics To learn more, see Exploring Application Endpoints for the purposes of Debugging with OpenShift 4.x 13.1.15.2. Trouble connecting the OpenShift Container Platform cluster API Symptom : The installation program completes but the OpenShift Container Platform cluster API is not available. The bootstrap virtual machine remains up after the bootstrap process is complete. When you enter the following command, the response will time out. USD oc login -u kubeadmin -p *** <apiurl> Cause : The bootstrap VM was not deleted by the installation program and has not released the cluster's API IP address. Solution : Use the wait-for subcommand to be notified when the bootstrap process is complete: USD ./openshift-install wait-for bootstrap-complete When the bootstrap process is complete, delete the bootstrap virtual machine: USD ./openshift-install destroy bootstrap 13.1.16. Post-installation tasks After the OpenShift Container Platform cluster initializes, you can perform the following tasks. Optional: After deployment, add or replace SSH keys using the Machine Config Operator (MCO) in OpenShift Container Platform. Optional: Remove the kubeadmin user. Instead, use the authentication provider to create a user with cluster-admin privileges. 13.2. Installing a cluster on RHV with customizations You can customize and install an OpenShift Container Platform cluster on Red Hat Virtualization (RHV), similar to the one shown in the following diagram. The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster. To install a customized cluster, you prepare the environment and perform the following steps: Create an installation configuration file, the install-config.yaml file, by running the installation program and answering its prompts. Inspect and modify parameters in the install-config.yaml file. Make a working copy of the install-config.yaml file. Run the installation program with a copy of the install-config.yaml file. Then, the installation program creates the OpenShift Container Platform cluster. For an alternative to installing a customized cluster, see Installing a default cluster . Note This installation program is available for Linux and macOS only. 13.2.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on Red Hat Virtualization (RHV) . If you use a firewall, configure it to allow the sites that your cluster requires access to. 13.2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 13.2.3. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.7 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation or process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. 
Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. 13.2.4. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.7. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. 
Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 13.2.5. Preparing the network environment on RHV Configure two static IP addresses for the OpenShift Container Platform cluster and create DNS entries using these addresses. Procedure Reserve two static IP addresses On the network where you plan to install OpenShift Container Platform, identify two static IP addresses that are outside the DHCP lease pool. Connect to a host on this network and verify that each of the IP addresses is not in use. For example, use Address Resolution Protocol (ARP) to check that none of the IP addresses have entries: USD arp 10.35.1.19 Example output 10.35.1.19 (10.35.1.19) -- no entry Reserve two static IP addresses following the standard practices for your network environment. Record these IP addresses for future reference. Create DNS entries for the OpenShift Container Platform REST API and apps domain names using this format: api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2 1 For <cluster-name> , <base-domain> , and <ip-address> , specify the cluster name, base domain, and static IP address of your OpenShift Container Platform API. 2 Specify the cluster name, base domain, and static IP address of your OpenShift Container Platform apps for Ingress and the load balancer. For example: api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20 13.2.6. Installing OpenShift Container Platform on RHV in insecure mode By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually. Although it is not recommended, you can override this functionality and install OpenShift Container Platform without verifying a certificate by installing OpenShift Container Platform on RHV in insecure mode. Warning Installing in insecure mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network. Procedure Create a file named ~/.ovirt/ovirt-config.yaml . Add the following content to ovirt-config.yaml : ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: "" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true 1 Specify the hostname or address of your oVirt engine. 2 Specify the fully qualified domain name of your oVirt engine. 3 Specify the admin password for your oVirt engine. Run the installer. 13.2.7. 
Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 13.2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 13.2.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat Virtualization (RHV). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Respond to the installation program prompts. For SSH Public Key , select a password-less public key, such as ~/.ssh/id_rsa.pub . This key authenticates connections with the new OpenShift Container Platform cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your ssh-agent process uses. For Platform , select ovirt . For Enter oVirt's API endpoint URL , enter the URL of the RHV API using this format: https://<engine-fqdn>/ovirt-engine/api 1 1 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api For Is the oVirt CA trusted locally? , enter Yes , because you have already set up a CA certificate. Otherwise, enter No . For oVirt's CA bundle , if you entered Yes for the preceding question, copy the certificate content from /etc/pki/ca-trust/source/anchors/ca.pem and paste it here. Then, press Enter twice. Otherwise, if you entered No for the preceding question, this question does not appear. For oVirt engine username , enter the user name and profile of the RHV administrator using this format: <username>@<profile> 1 1 For <username> , specify the user name of an RHV administrator. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. Together, the user name and profile should look similar to this example: ocpadmin@internal For oVirt engine password , enter the RHV admin password. For oVirt cluster , select the cluster for installing OpenShift Container Platform. For oVirt storage domain , select the storage domain for installing OpenShift Container Platform. For oVirt network , select a virtual network that has access to the RHV Manager REST API. 
For Internal API Virtual IP , enter the static IP address you set aside for the cluster's REST API. For Ingress virtual IP , enter the static IP address you reserved for the wildcard apps domain. For Base Domain , enter the base domain of the OpenShift Container Platform cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: virtlab.example.com For Cluster Name , enter the name of the cluster. For example, my-cluster . Use cluster name from the externally registered/resolvable DNS entries you created for the OpenShift Container Platform REST API and apps domain names. The installation program also gives this name to the cluster in the RHV environment. For Pull Secret , copy the pull secret from the pull-secret.txt file you downloaded earlier and paste it here. You can also get a copy of the same pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you have any intermediate CA certificates on the Manager, verify that the certificates appear in the ovirt-config.yaml file and the install-config.yaml file. If they do not appear, add them as follows: In the ~/.ovirt/ovirt-config.yaml file: [ovirt_ca_bundle]: | -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE----- In the install-config.yaml file: [additionalTrustBundle]: | -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE----- Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 13.2.9.1. Example install-config.yaml files for Red Hat Virtualization (RHV) You can customize the OpenShift Container Platform cluster the installation program creates by changing the parameters and parameter values in the install-config.yaml file. The following example is specific to installing OpenShift Container Platform on RHV. This file is located in the <installation_directory> you specified when you ran the following command. USD ./openshift-install create install-config --dir <installation_directory> Note These example files are provided for reference only. You must obtain your install-config.yaml file by using the installation program. Changing the install-config.yaml file can increase the resources your cluster requires. Verify that your RHV environment has those additional resources. Otherwise, the installation or cluster will fail. 
Example: This is the default install-config.yaml file
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: my-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  ovirt:
    api_vip: 10.46.8.230
    ingress_vip: 192.168.1.5
    ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b
    ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468
    ovirt_network_name: ovirtmgmt
    vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692
publish: External
pullSecret: '{"auths": ...}'
sshKey: ssh-ed12345 AAAA...
Example: A minimal install-config.yaml file
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  ovirt:
    api_vip: 10.46.8.230
    ingress_vip: 10.46.8.232
    ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b
    ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468
    ovirt_network_name: ovirtmgmt
    vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692
pullSecret: '{"auths": ...}'
sshKey: ssh-ed12345 AAAA...
Example: Custom machine pools in an install-config.yaml file
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform:
    ovirt:
      cpu:
        cores: 4
        sockets: 2
      memoryMB: 65536
      osDisk:
        sizeGB: 100
      vmType: server
  replicas: 3
compute:
- name: worker
  platform:
    ovirt:
      cpu:
        cores: 4
        sockets: 4
      memoryMB: 65536
      osDisk:
        sizeGB: 200
      vmType: server
  replicas: 5
metadata:
  name: test-cluster
platform:
  ovirt:
    api_vip: 10.46.8.230
    ingress_vip: 10.46.8.232
    ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b
    ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468
    ovirt_network_name: ovirtmgmt
    vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
13.2.9.2. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 13.2.9.2.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 13.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 13.2.9.2.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 13.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . 
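The relationship between the clusterNetwork block, the hostPrefix value, and the number of nodes the cluster can hold can be checked with a short calculation. The following is a minimal sketch that uses only the Python standard library and the default values from the table above.

# pod_network_math.py - a minimal sketch using only the standard library;
# the values below are the defaults from the table above.
import ipaddress

cluster_network = ipaddress.ip_network("10.128.0.0/14")  # networking.clusterNetwork.cidr
host_prefix = 23                                         # networking.clusterNetwork.hostPrefix

pod_ips_per_node = 2 ** (32 - host_prefix) - 2              # 510, as described above
max_nodes = 2 ** (host_prefix - cluster_network.prefixlen)  # number of /23 subnets in a /14

print(f"pod IPs per node : {pod_ips_per_node}")
print(f"maximum nodes    : {max_nodes}")                 # 512 with these defaults

If you need room for more nodes, widen the clusterNetwork block; raising hostPrefix also fits more nodes, but leaves fewer pod IP addresses on each one.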
Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 13.2.9.2.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 13.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.
Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 13.2.9.2.4. Additional Red Hat Virtualization (RHV) configuration parameters Additional RHV configuration parameters are described in the following table: Table 13.4. Additional RHV parameters for clusters Parameter Description Values platform.ovirt.ovirt_cluster_id Required. The Cluster where the VMs will be created. String. For example: 68833f9f-e89c-4891-b768-e2ba0815b76b platform.ovirt.ovirt_storage_domain_id Required. The Storage Domain ID where the VM disks will be created. String. For example: ed7b0f4e-0e96-492a-8fff-279213ee1468 platform.ovirt.ovirt_network_name Required. The network name where the VM nics will be created. String. For example: ocpcluster platform.ovirt.vnicProfileID Required. The vNIC profile ID of the VM network interfaces. This can be inferred if the cluster network has a single profile. String. For example: 3fa86930-0be5-4052-b667-b79f0a729692 platform.ovirt.api_vip Required. An IP address on the machine network that will be assigned to the API virtual IP (VIP). You can access the OpenShift API at this endpoint. String. Example: 10.46.8.230 platform.ovirt.ingress_vip Required. An IP address on the machine network that will be assigned to the Ingress virtual IP (VIP). String. Example: 10.46.8.232 13.2.9.2.5. Additional RHV parameters for machine pools Additional RHV configuration parameters for machine pools are described in the following table: Table 13.5. Additional RHV parameters for machine pools Parameter Description Values <machine-pool>.platform.ovirt.cpu Optional. Defines the CPU of the VM. Object <machine-pool>.platform.ovirt.cpu.cores Required if you use <machine-pool>.platform.ovirt.cpu . The number of cores. Total virtual CPUs (vCPUs) is cores * sockets. 
Integer <machine-pool>.platform.ovirt.cpu.sockets Required if you use <machine-pool>.platform.ovirt.cpu . The number of sockets per core. Total virtual CPUs (vCPUs) is cores * sockets. Integer <machine-pool>.platform.ovirt.memoryMB Optional. Memory of the VM in MiB. Integer <machine-pool>.platform.ovirt.instanceTypeID Optional. An instance type UUID, such as 00000009-0009-0009-0009-0000000000f1 , which you can get from the https://<engine-fqdn>/ovirt-engine/api/instancetypes endpoint. Warning The instance_type_id field is deprecated and will be removed in a future release. String of UUID <machine-pool>.platform.ovirt.osDisk Optional. Defines the first and bootable disk of the VM. String <machine-pool>.platform.ovirt.osDisk.sizeGB Required if you use <machine-pool>.platform.ovirt.osDisk . Size of the disk in GiB. Number <machine-pool>.platform.ovirt.vmType Optional. The VM workload type, such as high-performance , server , or desktop . By default, control plane nodes use high-performance , and worker nodes use server . For details, see Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows and Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . Note high_performance improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. For more information see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . String Note You can replace <machine-pool> with controlPlane or compute . 13.2.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Open the ovirt-imageio port to the Manager from the machine running the installer. By default, the port is 54322 . Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Important You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation. 13.2.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 13.2.11.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 13.2.11.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 13.2.11.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 13.2.12. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin To learn more, see Getting started with the OpenShift CLI . 13.2.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A Troubleshooting If the installation fails, the installation program times out and displays an error message. To learn more, see Troubleshooting installation issues . 13.2.14. Accessing the OpenShift Container Platform web console on RHV After the OpenShift Container Platform cluster initializes, you can log in to the OpenShift Container Platform web console. Procedure Optional: In the Red Hat Virtualization (RHV) Administration Portal, open Compute Cluster . Verify that the installation program creates the virtual machines. Return to the command line where the installation program is running. When the installation program finishes, it displays the user name and temporary password for logging into the OpenShift Container Platform web console. In a browser, open the URL of the OpenShift Container Platform web console. The URL uses this format: 1 For <clustername>.<basedomain> , specify the cluster name and base domain. For example: 13.2.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 13.2.16. Troubleshooting common issues with installing on Red Hat Virtualization (RHV) Here are some common issues you might encounter, along with proposed causes and solutions. 
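Before working through the specific issues below, it can help to gather the same node and Operator status programmatically and keep it for comparison. The following is a minimal sketch, assuming the kubernetes Python package is installed and KUBECONFIG is exported as shown earlier; the oc commands above remain the supported interface.

# cluster_health.py - a minimal sketch; assumes 'pip install kubernetes' and that
# KUBECONFIG points at <installation_directory>/auth/kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # reads the kubeconfig referenced by KUBECONFIG

# Node readiness, equivalent to 'oc get nodes'
for node in client.CoreV1Api().list_node().items:
    ready = next(c.status for c in node.status.conditions if c.type == "Ready")
    print(f"node {node.metadata.name}: Ready={ready}")

# ClusterOperator is an OpenShift custom resource, so query it through the
# custom objects API, equivalent to 'oc get clusteroperator'
operators = client.CustomObjectsApi().list_cluster_custom_object(
    "config.openshift.io", "v1", "clusteroperators")
for op in operators["items"]:
    conditions = {c["type"]: c["status"] for c in op["status"]["conditions"]}
    print(f'operator {op["metadata"]["name"]}: Available={conditions.get("Available")}')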
13.2.16.1. CPU load increases and nodes go into a Not Ready state Symptom : CPU load increases significantly and nodes start going into a Not Ready state. Cause : The storage domain latency might be too high, especially for control plane nodes (also known as the master nodes). Solution : Make the nodes ready again by restarting the kubelet service: USD systemctl restart kubelet Inspect the OpenShift Container Platform metrics service, which automatically gathers and reports on some valuable data such as the etcd disk sync duration. If the cluster is operational, use this data to help determine whether storage latency or throughput is the root issue. If so, consider using a storage resource that has lower latency and higher throughput. To get raw metrics, enter the following command as kubeadmin or user with cluster-admin privileges: USD oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics To learn more, see Exploring Application Endpoints for the purposes of Debugging with OpenShift 4.x 13.2.16.2. Trouble connecting the OpenShift Container Platform cluster API Symptom : The installation program completes but the OpenShift Container Platform cluster API is not available. The bootstrap virtual machine remains up after the bootstrap process is complete. When you enter the following command, the response will time out. USD oc login -u kubeadmin -p *** <apiurl> Cause : The bootstrap VM was not deleted by the installation program and has not released the cluster's API IP address. Solution : Use the wait-for subcommand to be notified when the bootstrap process is complete: USD ./openshift-install wait-for bootstrap-complete When the bootstrap process is complete, delete the bootstrap virtual machine: USD ./openshift-install destroy bootstrap 13.2.17. Post-installation tasks After the OpenShift Container Platform cluster initializes, you can perform the following tasks. Optional: After deployment, add or replace SSH keys using the Machine Config Operator (MCO) in OpenShift Container Platform. Optional: Remove the kubeadmin user. Instead, use the authentication provider to create a user with cluster-admin privileges. 13.2.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 13.3. Installing a cluster on RHV with user-provisioned infrastructure In OpenShift Container Platform version 4.7, you can install a customized OpenShift Container Platform cluster on Red Hat Virtualization (RHV) and other infrastructure that you provide. The OpenShift Container Platform documentation uses the term user-provisioned infrastructure to refer to this infrastructure type. The following diagram shows an example of a potential OpenShift Container Platform cluster running on a RHV cluster. The RHV hosts run virtual machines that contain both control plane and compute pods. One of the hosts also runs a Manager virtual machine and a bootstrap virtual machine that contains a temporary control plane pod.] 13.3.1. Prerequisites The following items are required to install an OpenShift Container Platform cluster on a RHV environment. You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on RHV . You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on Red Hat Virtualization (RHV) . You are familiar with the OpenShift Container Platform installation and update processes. 13.3.2. 
Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 13.3.3. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.7 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation or process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. 
To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. 13.3.4. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.7. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. 
Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 13.3.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Firewall Configure your firewall so your cluster has access to required sites. See also: Red Hat Virtualization Manager firewall requirements Host firewall requirements Load balancers Configure one or preferably two layer-4 load balancers: Provide load balancing for ports 6443 and 22623 on the control plane and bootstrap machines. Port 6443 provides access to the Kubernetes API server and must be reachable both internally and externally. Port 22623 must be accessible to nodes within the cluster. Provide load balancing for port 443 and 80 for machines that run the Ingress router, which are usually compute nodes in the default configuration. Both ports must be accessible from within and outside the cluster. DNS Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address. Create DNS records for api.<cluster_name>.<base_domain> (internal and external resolution) and api-int.<cluster_name>.<base_domain> (internal resolution) that point to the load balancer for the control plane machines. 
Create a DNS record for *.apps.<cluster_name>.<base_domain> that points to the load balancer for the Ingress router. For example, ports 443 and 80 of the compute machines. Table 13.6. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 13.7. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 13.8. Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 13.9. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 13.10. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . 13.3.6. Setting up the installation machine To run the binary openshift-install installation program and Ansible scripts, set up the RHV Manager or an Red Hat Enterprise Linux (RHEL) computer with network access to the RHV environment and the REST API on the Manager. Procedure Update or install Python3 and Ansible. For example: # dnf update python3 ansible Install the python3-ovirt-engine-sdk4 package to get the Python Software Development Kit. Install the ovirt.image-template Ansible role. On the RHV Manager and other Red Hat Enterprise Linux (RHEL) machines, this role is distributed as the ovirt-ansible-image-template package. For example, enter: # dnf install ovirt-ansible-image-template Install the ovirt.vm-infra Ansible role. On the RHV Manager and other RHEL machines, this role is distributed as the ovirt-ansible-vm-infra package. # dnf install ovirt-ansible-vm-infra Create an environment variable and assign an absolute or relative path to it. For example, enter: USD export ASSETS_DIR=./wrk Note The installation program uses this variable to create a directory where it saves important installation-related files. Later, the installation process reuses this variable to locate those asset files. Avoid deleting this assets directory; it is required for uninstalling the cluster. 13.3.7. Installing OpenShift Container Platform on RHV in insecure mode By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually. Although it is not recommended, you can override this functionality and install OpenShift Container Platform without verifying a certificate by installing OpenShift Container Platform on RHV in insecure mode. Warning Installing in insecure mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network. Procedure Create a file named ~/.ovirt/ovirt-config.yaml . Add the following content to ovirt-config.yaml : ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: "" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true 1 Specify the hostname or address of your oVirt engine. 2 Specify the fully qualified domain name of your oVirt engine. 3 Specify the admin password for your oVirt engine. Run the installer. 13.3.8. 
Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 13.3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 13.3.10. Downloading the Ansible playbooks Download the Ansible playbooks for installing OpenShift Container Platform version 4.7 on RHV. Procedure On your installation machine, run the following commands: USD mkdir playbooks USD cd playbooks USD curl -s -L -X GET https://api.github.com/repos/openshift/installer/contents/upi/ovirt?ref=release-4.7 | grep 'download_url.*\.yml' | awk '{ print USD2 }' | sed -r 's/("|",)//g' | xargs -n 1 curl -O steps After you download these Ansible playbooks, you must also create the environment variable for the assets directory and customize the inventory.yml file before you create an installation configuration file by running the installation program. 13.3.11. The inventory.yml file You use the inventory.yml file to define and create elements of the OpenShift Container Platform cluster you are installing. This includes elements such as the Red Hat Enterprise Linux CoreOS (RHCOS) image, virtual machine templates, bootstrap machine, control plane nodes, and worker nodes. You also use inventory.yml to destroy the cluster. The following inventory.yml example shows you the parameters and their default values. The quantities and numbers in these default values meet the requirements for running a production OpenShift Container Platform cluster in a RHV environment. Example inventory.yml file --- all: vars: ovirt_cluster: "Default" ocp: assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}" ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml" # --- # {op-system} section # --- rhcos: image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz" local_cmp_image_path: "/tmp/rhcos.qcow2.gz" local_image_path: "/tmp/rhcos.qcow2" # --- # Profiles section # --- control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: "{{ metadata.infraID }}-bootstrap" ocp_type: bootstrap profile: "{{ control_plane }}" type: server - name: "{{ metadata.infraID }}-master0" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master1" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master2" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-worker0" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker1" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker2" ocp_type: worker profile: "{{ compute 
}}" Important Enter values for parameters whose descriptions begin with "Enter." Otherwise, you can use the default value or replace it with a new value. General section ovirt_cluster : Enter the name of an existing RHV cluster in which to install the OpenShift Container Platform cluster. ocp.assets_dir : The path of a directory the openshift-install installation program creates to store the files that it generates. ocp.ovirt_config_path : The path of the ovirt-config.yaml file the installation program generates, for example, ./wrk/install-config.yaml . This file contains the credentials required to interact with the REST API of the Manager. Red Hat Enterprise Linux CoreOS (RHCOS) section image_url : Enter the URL of the RHCOS image you specified for download. local_cmp_image_path : The path of a local download directory for the compressed RHCOS image. local_image_path : The path of a local directory for the extracted RHCOS image. Profiles section This section consists of two profiles: control_plane : The profile of the bootstrap and control plane nodes. compute : The profile of workers nodes in the compute plane. These profiles have the following parameters. The default values of the parameters meet the minimum requirements for running a production cluster. You can increase or customize these values to meet your workload requirements. cluster : The value gets the cluster name from ovirt_cluster in the General Section. memory : The amount of memory, in GB, for the virtual machine. sockets : The number of sockets for the virtual machine. cores : The number of cores for the virtual machine. template : The name of the virtual machine template. If plan to install multiple clusters, and these clusters use templates that contain different specifications, prepend the template name with the ID of the cluster. operating_system : The type of guest operating system in the virtual machine. With oVirt/RHV version 4.4, this value must be rhcos_x64 so the value of Ignition script can be passed to the VM. type : Enter server as the type of the virtual machine. Important You must change the value of the type parameter from high_performance to server . disks : The disk specifications. The control_plane and compute nodes can have different storage domains. size : The minimum disk size. name : Enter the name of a disk connected to the target cluster in RHV. interface : Enter the interface type of the disk you specified. storage_domain : Enter the storage domain of the disk you specified. nics : Enter the name and network the virtual machines use. You can also specify the virtual network interface profile. By default, NICs obtain their MAC addresses from the oVirt/RHV MAC pool. Virtual machines section This final section, vms , defines the virtual machines you plan to create and deploy in the cluster. By default, it provides the minimum number of control plane and worker nodes for a production environment. vms contains three required elements: name : The name of the virtual machine. In this case, metadata.infraID prepends the virtual machine name with the infrastructure ID from the metadata.yml file. ocp_type : The role of the virtual machine in the OCP cluster. Possible values are bootstrap , master , worker . profile : The name of the profile from which each virtual machine inherits specifications. Possible values in this example are control_plane or compute . You can override the value a virtual machine inherits from its profile. 
To do this, you add the name of the profile attribute to the virtual machine in inventory.yml and assign it an overriding value. To see an example of this, examine the name: "{{ metadata.infraID }}-bootstrap" virtual machine in the preceding inventory.yml example: It has a type attribute whose value, server , overrides the value of the type attribute this virtual machine would otherwise inherit from the control_plane profile. Metadata variables For virtual machines, metadata.infraID prepends the name of the virtual machine with the infrastructure ID from the metadata.json file you create when you build the Ignition files. The playbooks use the following code to read infraID from the specific file located in the ocp.assets_dir . --- - name: include metadata.json vars include_vars: file: "{{ ocp.assets_dir }}/metadata.json" name: metadata ... 13.3.12. Specifying the RHCOS image settings Update the Red Hat Enterprise Linux CoreOS (RHCOS) image settings of the inventory.yml file. Later, when you run this file one of the playbooks, it downloads a compressed Red Hat Enterprise Linux CoreOS (RHCOS) image from the image_url URL to the local_cmp_image_path directory. The playbook then uncompresses the image to the local_image_path directory and uses it to create oVirt/RHV templates. Procedure Locate the RHCOS image download page for the version of OpenShift Container Platform you are installing, such as Index of /pub/openshift-v4/dependencies/rhcos/latest/latest . From that download page, copy the URL of an OpenStack qcow2 image, such as https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz . Edit the inventory.yml playbook you downloaded earlier. In it, paste the URL as the value for image_url . For example: rhcos: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz" 13.3.13. Creating the install config file You create an installation configuration file by running the installation program, openshift-install , and responding to its prompts with information you specified or gathered earlier. When you finish responding to the prompts, the installation program creates an initial version of the install-config.yaml file in the assets directory you specified earlier, for example, ./wrk/install-config.yaml The installation program also creates a file, USDHOME/.ovirt/ovirt-config.yaml , that contains all the connection parameters that are required to reach the Manager and use its REST API. NOTE: The installation process does not use values you supply for some parameters, such as Internal API virtual IP and Ingress virtual IP , because you have already configured them in your infrastructure DNS. It also uses the values you supply for parameters in inventory.yml , like the ones for oVirt cluster , oVirt storage , and oVirt network . And uses a script to remove or replace these same values from install-config.yaml with the previously mentioned virtual IPs . Procedure Run the installation program: USD openshift-install create install-config --dir USDASSETS_DIR Respond to the installation program's prompts with information about your system. Example output ? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? 
Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********> For Internal API virtual IP and Ingress virtual IP , supply the IP addresses you specified when you configured the DNS service. Together, the values you enter for the oVirt cluster and Base Domain prompts form the FQDN portion of URLs for the REST API and any applications you create, such as https://api.ocp4.example.org:6443/ and https://console-openshift-console.apps.ocp4.example.org . You can get the pull secret from the Red Hat OpenShift Cluster Manager . 13.3.14. Customizing install-config.yaml Here, you use three Python scripts to override some of the installation program's default behaviors: By default, the installation program uses the machine API to create nodes. To override this default behavior, you set the number of compute nodes to zero replicas. Later, you use Ansible playbooks to create the compute nodes. By default, the installation program sets the IP range of the machine network for nodes. To override this default behavior, you set the IP range to match your infrastructure. By default, the installation program sets the platform to ovirt . However, installing a cluster on user-provisioned infrastructure is more similar to installing a cluster on bare metal. Therefore, you delete the ovirt platform section from install-config.yaml and change the platform to none . Instead, you use inventory.yml to specify all of the required settings. Note These snippets work with Python 3 and Python 2. Procedure Set the number of compute nodes to zero replicas: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["compute"][0]["replicas"] = 0 open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Set the IP range of the machine network. For example, to set the range to 172.16.0.0/16 , enter: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16" open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Remove the ovirt section and change the platform to none : USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) platform = conf["platform"] del platform["ovirt"] platform["none"] = {} open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Warning Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to none , allowing OpenShift Container Platform to identify each node as a bare-metal node and the cluster as a bare-metal cluster. This is the same as installing a cluster on any platform , and has the following limitations: There will be no cluster provider so you must manually add each machine and there will be no node scaling capabilities. The oVirt CSI driver will not be installed and there will be no CSI capabilities. 13.3.15. Generate manifest files Use the installation program to generate a set of manifest files in the assets directory. The command to generate the manifest files displays a warning message before it consumes the install-config.yaml file. If you plan to reuse the install-config.yaml file, create a backup copy of it before you generate the manifest files.
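Before generating the manifests, you can confirm that the three edits from the previous section took effect. The following is a minimal sketch in the same style as the snippets above; the expected values are noted in the comments.

# verify_install_config.py - a minimal sketch in the same style as the snippets above.
import os
import yaml

path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
with open(path) as f:
    conf = yaml.safe_load(f)

print("compute replicas:", conf["compute"][0]["replicas"])                   # expect 0
print("machine network :", conf["networking"]["machineNetwork"][0]["cidr"])  # expect your IP range
print("platform        :", list(conf["platform"].keys()))                    # expect ['none']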
Procedure Optional: Create a backup copy of the install-config.yaml file: USD cp install-config.yaml install-config.yaml.backup Generate a set of manifests in your assets directory: USD openshift-install create manifests --dir USDASSETS_DIR This command displays the following messages. Example output INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings The command generates the following manifest files: Example output USD tree . └── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml steps Make control plane nodes non-schedulable. 13.3.16. Making control-plane nodes non-schedulable Because you are manually creating and deploying the control plane machines, you must configure a manifest file to make the control plane nodes non-schedulable. Procedure To make the control plane nodes non-schedulable, enter: USD python3 -c 'import os, yaml path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"] data = yaml.safe_load(open(path)) data["spec"]["mastersSchedulable"] = False open(path, "w").write(yaml.dump(data, default_flow_style=False))' 13.3.17. Building the Ignition files To build the Ignition files from the manifest files you just generated and modified, you run the installation program. This action creates a Red Hat Enterprise Linux CoreOS (RHCOS) machine, initramfs , which fetches the Ignition files and performs the configurations needed to create a node. In addition to the Ignition files, the installation program generates the following: An auth directory that contains the admin credentials for connecting to the cluster with the oc and kubectl utilities. A metadata.json file that contains information such as the OpenShift Container Platform cluster name, cluster ID, and infrastructure ID for the current installation. The Ansible playbooks for this installation process use the value of infraID as a prefix for the virtual machines they create. This prevents naming conflicts when there are multiple installations in the same oVirt/RHV cluster. Note Certificates in Ignition configuration files expire after 24 hours. Complete the cluster installation and keep the cluster running in a non-degraded state for 24 hours so that the first certificate rotation can finish. 
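Before you build the Ignition files, you can optionally confirm that the scheduler change from the previous section was applied. This is a minimal sketch that uses the same python3 and PyYAML tooling as the preceding snippet:
python3 -c 'import os, yaml
path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"]
# Print the mastersSchedulable setting from the scheduler manifest
print(yaml.safe_load(open(path))["spec"]["mastersSchedulable"])'
The command prints False when the manifest was updated successfully.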
Procedure To build the Ignition files, enter: USD openshift-install create ignition-configs --dir USDASSETS_DIR Example output USD tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign 13.3.18. Creating templates and virtual machines After confirming the variables in the inventory.yml , you run the first Ansible provisioning playbook, create-templates-and-vms.yml . This playbook uses the connection parameters for the RHV Manager from USDHOME/.ovirt/ovirt-config.yaml and reads metadata.json in the assets directory. If a local Red Hat Enterprise Linux CoreOS (RHCOS) image is not already present, the playbook downloads one from the URL you specified for image_url in inventory.yml . It extracts the image and uploads it to RHV to create templates. The playbook creates a template based on the control_plane and compute profiles in the inventory.yml file. If these profiles have different names, it creates two templates. When the playbook finishes, the virtual machines it creates are stopped. You can get information from them to help configure other infrastructure elements. For example, you can get the virtual machines' MAC addresses to configure DHCP to assign permanent IP addresses to the virtual machines. Procedure In inventory.yml , under the control_plane and compute variables, change both instances of type: high_performance to type: server . Optional: If you plan to perform multiple installations to the same cluster, create different templates for each OCP installation. In the inventory.yml file, prepend the value of template with infraID . For example: control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: "{{ metadata.infraID }}-rhcos_tpl" operating_system: "rhcos_x64" ... Create the templates and virtual machines: USD ansible-playbook -i inventory.yml create-templates-and-vms.yml 13.3.19. Creating the bootstrap machine You create a bootstrap machine by running the bootstrap.yml playbook. This playbook starts the bootstrap virtual machine, and passes it the bootstrap.ign Ignition file from the assets directory. The bootstrap node configures itself so it can serve Ignition files to the control plane nodes. To monitor the bootstrap process, you use the console in the RHV Administration Portal or connect to the virtual machine by using SSH. Procedure Create the bootstrap machine: USD ansible-playbook -i inventory.yml bootstrap.yml Connect to the bootstrap machine using a console in the Administration Portal or SSH. Replace <bootstrap_ip> with the bootstrap node IP address. To use SSH, enter: USD ssh core@<bootstrap_ip> Collect bootkube.service journald unit logs for the release image service from the bootstrap node: [core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. 13.3.20. Creating the control plane nodes You create the control plane nodes by running the masters.yml playbook. This playbook passes the master.ign Ignition file to each of the virtual machines. The Ignition file contains a directive for the control plane node to get the Ignition from a URL such as https://api-int.ocp4.example.org:22623/config/master . The port number in this URL is managed by the load balancer, and is accessible only inside the cluster.
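You can confirm this directive before you start the control plane virtual machines by inspecting the generated Ignition file. For example, if jq is installed on the installation machine, the following sketch prints the machine config server URL; the exact key layout can vary between Ignition spec versions:
jq -r '.ignition.config.merge[0].source' "${ASSETS_DIR}/master.ign"
The output should match the URL shown above.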
Procedure Create the control plane nodes: USD ansible-playbook -i inventory.yml masters.yml While the playbook creates your control plane, monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR Example output INFO API v1.18.3+b74c5ed up INFO Waiting up to 40m0s for bootstrapping to complete... When all the pods on the control plane nodes and etcd are up and running, the installation program displays the following output. Example output INFO It is now safe to remove the bootstrap resources 13.3.21. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 13.3.22. Removing the bootstrap machine After the wait-for command shows that the bootstrap process is complete, you must remove the bootstrap virtual machine to free up compute, memory, and storage resources. Also, remove settings for the bootstrap machine from the load balancer directives. Procedure To remove the bootstrap machine from the cluster, enter: USD ansible-playbook -i inventory.yml retire-bootstrap.yml Remove settings for the bootstrap machine from the load balancer directives. 13.3.23. Creating the worker nodes and completing the installation Creating worker nodes is similar to creating control plane nodes. However, worker nodes do not automatically join the cluster. To add them to the cluster, you review and approve the workers' pending CSRs (Certificate Signing Requests). After approving the first requests, you continue approving CSRs until all of the worker nodes are approved. When you complete this process, the worker nodes become Ready and can have pods scheduled to run on them. Finally, monitor the command line to see when the installation process completes. Procedure Create the worker nodes: USD ansible-playbook -i inventory.yml workers.yml To list all of the CSRs, enter: USD oc get csr -A Eventually, this command displays one CSR per node.
For example: Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending To filter the list and see only pending CSRs, enter: USD watch "oc get csr -A | grep pending -i" This command refreshes the output every two seconds and displays only pending CSRs. For example: Example output Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending Inspect each pending request. For example: Example output USD oc describe csr csr-m724n Example output Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none> If the CSR information is correct, approve the request: USD oc adm certificate approve csr-m724n Wait for the installation process to finish: USD openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug When the installation completes, the command line displays the URL of the OpenShift Container Platform web console and the administrator user name and password. 13.3.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 13.4. 
Installing a cluster on RHV in a restricted network In OpenShift Container Platform version 4.7, you can install a customized OpenShift Container Platform cluster on Red Hat Virtualization (RHV) in a restricted network by creating an internal mirror of the installation release content. 13.4.1. Prerequisites The following items are required to install an OpenShift Container Platform cluster on a RHV environment. You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on RHV . You are familiar with the OpenShift Container Platform installation and update processes. Create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Review details about the OpenShift Container Platform installation and update processes. If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 13.4.2. About installations in restricted networks In OpenShift Container Platform 4.7, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions. 13.4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 13.4.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to obtain the images that are necessary to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. 
During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 13.4.4. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.7 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines, which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP.
A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. 13.4.5. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.7. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. 
From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 13.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Firewall Configure your firewall so your cluster has access to required sites. See also: Red Hat Virtualization Manager firewall requirements Host firewall requirements DNS Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address. Create DNS records for api.<cluster_name>.<base_domain> (internal and external resolution) and api-int.<cluster_name>.<base_domain> (internal resolution) that point to the load balancer for the control plane machines. Create a DNS record for *.apps.<cluster_name>.<base_domain> that points to the load balancer for the Ingress router. For example, ports 443 and 80 of the compute machines. Table 13.11. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 13.12. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 13.13. Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. 
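After the machines are up, you can spot-check reachability of a few of the ports listed in the preceding tables from one machine to another. The following loop is a minimal sketch that relies only on the bash /dev/tcp built-in, because additional tools might not be present on RHCOS; replace <node_ip> with the address of the machine you are testing, and test only the ports that apply to that machine's role:
for port in 6443 10250 22623; do
  # A successful open means something is listening on that port
  timeout 3 bash -c "</dev/tcp/<node_ip>/${port}" 2>/dev/null && echo "port ${port} reachable" || echo "port ${port} not reachable"
done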
Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 13.14. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 13.15. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . 13.4.7. User-provisioned DNS requirements DNS is used for name resolution and reverse name resolution. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 13.16. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Bootstrap bootstrap.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Master hosts <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. Worker hosts <worker><n>.<cluster_name>.<base_domain>. Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Tip You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records. The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another. Example 13.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. 
*.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF The following example BIND zone file shows sample PTR records for reverse name resolution. Example 13.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is "last octet" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF 13.4.8. Setting up the installation machine To run the binary openshift-install installation program and Ansible scripts, set up the RHV Manager or a Red Hat Enterprise Linux (RHEL) computer with network access to the RHV environment and the REST API on the Manager. Procedure Update or install Python3 and Ansible. For example: # dnf update python3 ansible Install the python3-ovirt-engine-sdk4 package to get the Python Software Development Kit. Install the ovirt.image-template Ansible role. On the RHV Manager and other Red Hat Enterprise Linux (RHEL) machines, this role is distributed as the ovirt-ansible-image-template package. For example, enter: # dnf install ovirt-ansible-image-template Install the ovirt.vm-infra Ansible role. On the RHV Manager and other RHEL machines, this role is distributed as the ovirt-ansible-vm-infra package. # dnf install ovirt-ansible-vm-infra Create an environment variable and assign an absolute or relative path to it. For example, enter: USD export ASSETS_DIR=./wrk Note The installation program uses this variable to create a directory where it saves important installation-related files. Later, the installation process reuses this variable to locate those asset files. Avoid deleting this assets directory; it is required for uninstalling the cluster. 13.4.9. Setting up the CA certificate for RHV Download the CA certificate from the Red Hat Virtualization (RHV) Manager and set it up on the installation machine. You can download the certificate from a webpage on the RHV Manager or by using a curl command. Later, you provide the certificate to the installation program. Procedure Use either of these two methods to download the CA certificate: Go to the Manager's webpage, https://<engine-fqdn>/ovirt-engine/ . Then, under Downloads , click the CA Certificate link. Run the following command: USD curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem 1 1 For <engine-fqdn> , specify the fully qualified domain name of the RHV Manager, such as rhv-env.virtlab.example.com . Configure the CA file to grant rootless user access to the Manager. Set the CA file permissions to have an octal value of 0644 (symbolic value: -rw-r--r-- ): USD sudo chmod 0644 /tmp/ca.pem
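Optionally, inspect the downloaded certificate before you install it to confirm that it was issued by the expected Manager. This is a sketch that assumes openssl is available on the installation machine:
openssl x509 -in /tmp/ca.pem -noout -subject -issuer -dates
The subject and issuer fields should reference your RHV Manager's certificate authority, and the dates should show that the certificate is still valid.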
For Linux, copy the CA certificate to the directory for server certificates. Use -p to preserve the permissions: USD sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem Add the certificate to the certificate manager for your operating system: For macOS, double-click the certificate file and use the Keychain Access utility to add the file to the System keychain. For Linux, update the CA trust: USD sudo update-ca-trust Note If you use your own certificate authority, make sure the system trusts it. Additional resources To learn more, see Authentication and Security in the RHV documentation. 13.4.10. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 13.4.11. Downloading the Ansible playbooks Download the Ansible playbooks for installing OpenShift Container Platform version 4.7 on RHV. Procedure On your installation machine, run the following commands: USD mkdir playbooks USD cd playbooks USD curl -s -L -X GET https://api.github.com/repos/openshift/installer/contents/upi/ovirt?ref=release-4.7 | grep 'download_url.*\.yml' | awk '{ print USD2 }' | sed -r 's/("|",)//g' | xargs -n 1 curl -O steps After you download these Ansible playbooks, you must also create the environment variable for the assets directory and customize the inventory.yml file before you create an installation configuration file by running the installation program. 13.4.12. The inventory.yml file You use the inventory.yml file to define and create elements of the OpenShift Container Platform cluster you are installing.
This includes elements such as the Red Hat Enterprise Linux CoreOS (RHCOS) image, virtual machine templates, bootstrap machine, control plane nodes, and worker nodes. You also use inventory.yml to destroy the cluster. The following inventory.yml example shows you the parameters and their default values. The quantities and numbers in these default values meet the requirements for running a production OpenShift Container Platform cluster in a RHV environment. Example inventory.yml file --- all: vars: ovirt_cluster: "Default" ocp: assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}" ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml" # --- # RHCOS section # --- rhcos: image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz" local_cmp_image_path: "/tmp/rhcos.qcow2.gz" local_image_path: "/tmp/rhcos.qcow2" # --- # Profiles section # --- control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: "{{ metadata.infraID }}-bootstrap" ocp_type: bootstrap profile: "{{ control_plane }}" type: server - name: "{{ metadata.infraID }}-master0" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master1" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master2" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-worker0" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker1" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker2" ocp_type: worker profile: "{{ compute }}" Important Enter values for parameters whose descriptions begin with "Enter." Otherwise, you can use the default value or replace it with a new value. General section ovirt_cluster : Enter the name of an existing RHV cluster in which to install the OpenShift Container Platform cluster. ocp.assets_dir : The path of a directory the openshift-install installation program creates to store the files that it generates. ocp.ovirt_config_path : The path of the ovirt-config.yaml file the installation program generates, for example, USDHOME/.ovirt/ovirt-config.yaml . This file contains the credentials required to interact with the REST API of the Manager. Red Hat Enterprise Linux CoreOS (RHCOS) section image_url : Enter the URL of the RHCOS image you specified for download. local_cmp_image_path : The path of a local download directory for the compressed RHCOS image. local_image_path : The path of a local directory for the extracted RHCOS image. Profiles section This section consists of two profiles: control_plane : The profile of the bootstrap and control plane nodes. compute : The profile of worker nodes in the compute plane. These profiles have the following parameters.
The default values of the parameters meet the minimum requirements for running a production cluster. You can increase or customize these values to meet your workload requirements. cluster : The value gets the cluster name from ovirt_cluster in the General Section. memory : The amount of memory, in GB, for the virtual machine. sockets : The number of sockets for the virtual machine. cores : The number of cores for the virtual machine. template : The name of the virtual machine template. If you plan to install multiple clusters, and these clusters use templates that contain different specifications, prepend the template name with the ID of the cluster. operating_system : The type of guest operating system in the virtual machine. With oVirt/RHV version 4.4, this value must be rhcos_x64 so the Ignition script can be passed to the VM. type : Enter server as the type of the virtual machine. Important You must change the value of the type parameter from high_performance to server . disks : The disk specifications. The control_plane and compute nodes can have different storage domains. size : The minimum disk size. name : Enter the name of a disk connected to the target cluster in RHV. interface : Enter the interface type of the disk you specified. storage_domain : Enter the storage domain of the disk you specified. nics : Enter the name and network the virtual machines use. You can also specify the virtual network interface profile. By default, NICs obtain their MAC addresses from the oVirt/RHV MAC pool. Virtual machines section This final section, vms , defines the virtual machines you plan to create and deploy in the cluster. By default, it provides the minimum number of control plane and worker nodes for a production environment. vms contains three required elements: name : The name of the virtual machine. In this case, metadata.infraID prepends the virtual machine name with the infrastructure ID from the metadata.json file. ocp_type : The role of the virtual machine in the OCP cluster. Possible values are bootstrap , master , worker . profile : The name of the profile from which each virtual machine inherits specifications. Possible values in this example are control_plane or compute . You can override the value a virtual machine inherits from its profile. To do this, you add the name of the profile attribute to the virtual machine in inventory.yml and assign it an overriding value. To see an example of this, examine the name: "{{ metadata.infraID }}-bootstrap" virtual machine in the preceding inventory.yml example: It has a type attribute whose value, server , overrides the value of the type attribute this virtual machine would otherwise inherit from the control_plane profile. Metadata variables For virtual machines, metadata.infraID prepends the name of the virtual machine with the infrastructure ID from the metadata.json file you create when you build the Ignition files. The playbooks use the following code to read infraID from the specific file located in the ocp.assets_dir . --- - name: include metadata.json vars include_vars: file: "{{ ocp.assets_dir }}/metadata.json" name: metadata ... 13.4.13. Specifying the RHCOS image settings Update the Red Hat Enterprise Linux CoreOS (RHCOS) image settings of the inventory.yml file. Later, when you run one of the playbooks, it downloads a compressed Red Hat Enterprise Linux CoreOS (RHCOS) image from the image_url URL to the local_cmp_image_path directory. The playbook then uncompresses the image to the local_image_path directory and uses it to create oVirt/RHV templates.
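If the default /tmp location does not have enough free space for both the compressed and the extracted image, you can point these paths at a different directory before you run the playbooks. The following sketch uses /var/tmp purely as an illustration; leave image_url unchanged:
rhcos:
  local_cmp_image_path: "/var/tmp/rhcos.qcow2.gz"
  local_image_path: "/var/tmp/rhcos.qcow2"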
Procedure Locate the RHCOS image download page for the version of OpenShift Container Platform you are installing, such as Index of /pub/openshift-v4/dependencies/rhcos/latest/latest . From that download page, copy the URL of an OpenStack qcow2 image, such as https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz . Edit the inventory.yml playbook you downloaded earlier. In it, paste the URL as the value for image_url . For example: rhcos: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz" 13.4.14. Creating the install config file You create an installation configuration file by running the installation program, openshift-install , and responding to its prompts with information you specified or gathered earlier. When you finish responding to the prompts, the installation program creates an initial version of the install-config.yaml file in the assets directory you specified earlier, for example, ./wrk/install-config.yaml . The installation program also creates a file, USDHOME/.ovirt/ovirt-config.yaml , that contains all the connection parameters that are required to reach the Manager and use its REST API. NOTE: The installation process does not use values you supply for some parameters, such as Internal API virtual IP and Ingress virtual IP , because you have already configured them in your infrastructure DNS. It also uses the values you supply for parameters in inventory.yml , like the ones for oVirt cluster , oVirt storage , and oVirt network , and uses a script to remove or replace these same values from install-config.yaml with the previously mentioned virtual IPs . Procedure Run the installation program: USD openshift-install create install-config --dir USDASSETS_DIR Respond to the installation program's prompts with information about your system. Example output ? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********> For Internal API virtual IP and Ingress virtual IP , supply the IP addresses you specified when you configured the DNS service. Together, the values you enter for the Cluster Name and Base Domain prompts form the FQDN portion of URLs for the REST API and any applications you create, such as https://api.ocp4.example.org:6443/ and https://console-openshift-console.apps.ocp4.example.org . You can get the pull secret from the Red Hat OpenShift Cluster Manager . 13.4.16. Sample install-config.yaml file for RHV You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{"auths": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading (SMT), or hyperthreading . By default, SMT is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml , ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 12 You must set the platform to none . You cannot provide additional platform configuration variables for RHV infrastructure.
Warning Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to none , allowing OpenShift Container Platform to identify each node as a bare-metal node and the cluster as a bare-metal cluster. This is the same as installing a cluster on any platform , and has the following limitations: There will be no cluster provider so you must manually add each machine and there will be no node scaling capabilities. The oVirt CSI driver will not be installed and there will be no CSI capabilities. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 14 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 13.4.16.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 13.4.17. Customizing install-config.yaml Here, you use three Python scripts to override some of the installation program's default behaviors: By default, the installation program uses the machine API to create nodes. To override this default behavior, you set the number of compute nodes to zero replicas. Later, you use Ansible playbooks to create the compute nodes. By default, the installation program sets the IP range of the machine network for nodes. To override this default behavior, you set the IP range to match your infrastructure. By default, the installation program sets the platform to ovirt . However, installing a cluster on user-provisioned infrastructure is more similar to installing a cluster on bare metal. Therefore, you delete the ovirt platform section from install-config.yaml and change the platform to none . Instead, you use inventory.yml to specify all of the required settings. Note These snippets work with Python 3 and Python 2. Procedure Set the number of compute nodes to zero replicas: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["compute"][0]["replicas"] = 0 open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Set the IP range of the machine network. For example, to set the range to 172.16.0.0/16 , enter: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16" open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Remove the ovirt section and change the platform to none : USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) platform = conf["platform"] del platform["ovirt"] platform["none"] = {} open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Warning Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to none , allowing OpenShift Container Platform to identify each node as a bare-metal node and the cluster as a bare-metal cluster. 
This is the same as installing a cluster on any platform , and has the following limitations: There will be no cluster provider so you must manually add each machine and there will be no node scaling capabilities. The oVirt CSI driver will not be installed and there will be no CSI capabilities. 13.4.18. Generate manifest files Use the installation program to generate a set of manifest files in the assets directory. The command to generate the manifest files displays a warning message before it consumes the install-config.yaml file. If you plan to reuse the install-config.yaml file, create a backup copy of it before you back it up before you generate the manifest files. Procedure Optional: Create a backup copy of the install-config.yaml file: USD cp install-config.yaml install-config.yaml.backup Generate a set of manifests in your assets directory: USD openshift-install create manifests --dir USDASSETS_DIR This command displays the following messages. Example output INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings The command generates the following manifest files: Example output USD tree . └── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml steps Make control plane nodes non-schedulable. 13.4.19. Making control-plane nodes non-schedulable Because you are manually creating and deploying the control plane machines, you must configure a manifest file to make the control plane nodes non-schedulable. Procedure To make the control plane nodes non-schedulable, enter: USD python3 -c 'import os, yaml path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"] data = yaml.safe_load(open(path)) data["spec"]["mastersSchedulable"] = False open(path, "w").write(yaml.dump(data, default_flow_style=False))' 13.4.20. Building the Ignition files To build the Ignition files from the manifest files you just generated and modified, you run the installation program. This action creates a Red Hat Enterprise Linux CoreOS (RHCOS) machine, initramfs , which fetches the Ignition files and performs the configurations needed to create a node. In addition to the Ignition files, the installation program generates the following: An auth directory that contains the admin credentials for connecting to the cluster with the oc and kubectl utilities. 
A metadata.json file that contains information such as the OpenShift Container Platform cluster name, cluster ID, and infrastructure ID for the current installation. The Ansible playbooks for this installation process use the value of infraID as a prefix for the virtual machines they create. This prevents naming conflicts when there are multiple installations in the same oVirt/RHV cluster. Note Certificates in Ignition configuration files expire after 24 hours. Complete the cluster installation and keep the cluster running in a non-degraded state for 24 hours so that the first certificate rotation can finish. Procedure To build the Ignition files, enter: USD openshift-install create ignition-configs --dir USDASSETS_DIR Example output USD tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign 13.4.21. Creating templates and virtual machines After confirming the variables in the inventory.yml , you run the first Ansible provisioning playbook, create-templates-and-vms.yml . This playbook uses the connection parameters for the RHV Manager from USDHOME/.ovirt/ovirt-config.yaml and reads metadata.json in the assets directory. If a local Red Hat Enterprise Linux CoreOS (RHCOS) image is not already present, the playbook downloads one from the URL you specified for image_url in inventory.yml . It extracts the image and uploads it to RHV to create templates. The playbook creates a template based on the control_plane and compute profiles in the inventory.yml file. If these profiles have different names, it creates two templates. When the playbook finishes, the virtual machines it creates are stopped. You can get information from them to help configure other infrastructure elements. For example, you can get the virtual machines' MAC addresses to configure DHCP to assign permanent IP addresses to the virtual machines. Procedure In inventory.yml , under the control_plane and compute variables, change both instances of type: high_performance to type: server . Optional: If you plan to perform multiple installations to the same cluster, create different templates for each OCP installation. In the inventory.yml file, prepend the value of template with infraID . For example: control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: "{{ metadata.infraID }}-rhcos_tpl" operating_system: "rhcos_x64" ... Create the templates and virtual machines: USD ansible-playbook -i inventory.yml create-templates-and-vms.yml 13.4.22. Creating the bootstrap machine You create a bootstrap machine by running the bootstrap.yml playbook. This playbook starts the bootstrap virtual machine, and passes it the bootstrap.ign Ignition file from the assets directory. The bootstrap node configures itself so it can serve Ignition files to the control plane nodes. To monitor the bootstrap process, you use the console in the RHV Administration Portal or connect to the virtual machine by using SSH. Procedure Create the bootstrap machine: USD ansible-playbook -i inventory.yml bootstrap.yml Connect to the bootstrap machine using a console in the Administration Portal or SSH. Replace <bootstrap_ip> with the bootstrap node IP address. 
To use SSH, enter: USD ssh core@<boostrap.ip> Collect bootkube.service journald unit logs for the release image service from the bootstrap node: [core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. 13.4.23. Creating the control plane nodes You create the control plane nodes by running the masters.yml playbook. This playbook passes the master.ign Ignition file to each of the virtual machines. The Ignition file contains a directive for the control plane node to get the Ignition from a URL such as https://api-int.ocp4.example.org:22623/config/master . The port number in this URL is managed by the load balancer, and is accessible only inside the cluster. Procedure Create the control plane nodes: USD ansible-playbook -i inventory.yml masters.yml While the playbook creates your control plane, monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR Example output INFO API v1.18.3+b74c5ed up INFO Waiting up to 40m0s for bootstrapping to complete... When all the pods on the control plane nodes and etcd are up and running, the installation program displays the following output. Example output INFO It is now safe to remove the bootstrap resources 13.4.24. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 13.4.25. Removing the bootstrap machine After the wait-for command shows that the bootstrap process is complete, you must remove the bootstrap virtual machine to free up compute, memory, and storage resources. Also, remove settings for the bootstrap machine from the load balancer directives. Procedure To remove the bootstrap machine from the cluster, enter: USD ansible-playbook -i inventory.yml retire-bootstrap.yml Remove settings for the bootstrap machine from the load balancer directives. 13.4.26. Creating the worker nodes and completing the installation Creating worker nodes is similar to creating control plane nodes. However, worker nodes workers do not automatically join the cluster. To add them to the cluster, you review and approve the workers' pending CSRs (Certificate Signing Requests). After approving the first requests, you continue approving CSR until all of the worker nodes are approved. When you complete this process, the worker nodes become Ready and can have pods scheduled to run on them. Finally, monitor the command line to see when the installation process completes. Procedure Create the worker nodes: USD ansible-playbook -i inventory.yml workers.yml To list all of the CSRs, enter: USD oc get csr -A Eventually, this command displays one CSR per node. 
For example: Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending To filter the list and see only pending CSRs, enter: USD watch "oc get csr -A | grep pending -i" This command refreshes the output every two seconds and displays only pending CSRs. For example: Example output Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending Inspect each pending request. For example: Example output USD oc describe csr csr-m724n Example output Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none> If the CSR information is correct, approve the request: USD oc adm certificate approve csr-m724n Wait for the installation process to finish: USD openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug When the installation completes, the command line displays the URL of the OpenShift Container Platform web console and the administrator user name and password. 13.4.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 13.4.28. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. 
In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Global Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 13.5. Uninstalling a cluster on RHV You can remove an OpenShift Container Platform cluster from Red Hat Virtualization (RHV). 13.5.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 13.5.2. Removing a cluster that uses user-provisioned infrastructure When you are finished using the cluster, you can remove a cluster that uses user-provisioned infrastructure from your cloud. Prerequisites Have the original playbook files, assets directory and files, and USDASSETS_DIR environment variable that you used to you install the cluster. Typically, you can achieve this by using the same computer you used when you installed the cluster. Procedure To remove the cluster, enter: USD ansible-playbook -i inventory.yml \ retire-bootstrap.yml \ retire-masters.yml \ retire-workers.yml Remove any configurations you added to DNS, load balancers, and any other infrastructure for this cluster.
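Once the retirement playbooks and the manual DNS and load balancer cleanup are complete, you can optionally remove the local installation artifacts as well. The following is a minimal sketch, assuming the same ASSETS_DIR environment variable and wrk directory used throughout this section; deleting the directory discards the cluster's kubeconfig, kubeadmin password, and Ignition files, so only do this when you are certain the cluster is gone:

rm -rf "$ASSETS_DIR"    # removes wrk/, including auth/kubeconfig and the Ignition files
unset KUBECONFIG        # clear the variable if it still points at the deleted kubeconfig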
[ "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "rhv-env.virtlab.example.com:443", "<username>@<profile> 1", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "https://<engine-fqdn>/ovirt-engine/api 1", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "<username>@<profile> 1", "ocpadmin@internal", "[ovirt_ca_bundle]: | -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE-----", "[additionalTrustBundle]: | -----BEGIN CERTIFICATE----- 
<MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE-----", "./openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: example.com compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: my-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: ovirt: api_vip: 10.46.8.230 ingress_vip: 192.168.1.5 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 publish: External pullSecret: '{\"auths\": ...}' sshKey: ssh-ed12345 AAAA", "apiVersion: v1 baseDomain: example.com metadata: name: test-cluster platform: ovirt: api_vip: 10.46.8.230 ingress_vip: 10.46.8.232 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed12345 AAAA", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: ovirt: cpu: cores: 4 sockets: 2 memoryMB: 65536 osDisk: sizeGB: 100 vmType: server replicas: 3 compute: - name: worker platform: ovirt: cpu: cores: 4 sockets: 4 memoryMB: 65536 osDisk: sizeGB: 200 vmType: server replicas: 5 metadata: name: test-cluster platform: ovirt: api_vip: 10.46.8.230 ingress_vip: 10.46.8.232 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "dnf update python3 ansible", "dnf install ovirt-ansible-image-template", "dnf install ovirt-ansible-vm-infra", "export ASSETS_DIR=./wrk", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "mkdir playbooks", "cd playbooks", "curl -s -L -X GET https://api.github.com/repos/openshift/installer/contents/upi/ovirt?ref=release-4.7 | grep 'download_url.*\\.yml' | awk '{ print USD2 }' | sed -r 's/(\"|\",)//g' | xargs -n 1 curl -O", "--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" 
ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"", "--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata", "rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz\"", "openshift-install create install-config --dir USDASSETS_DIR", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir USDASSETS_DIR", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings", "tree . 
└── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml", "python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openshift-install create ignition-configs --dir USDASSETS_DIR", "tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml bootstrap.yml", "ssh core@<boostrap.ip>", "[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service", "ansible-playbook -i inventory.yml masters.yml", "openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR", "INFO API v1.18.3+b74c5ed up INFO Waiting up to 40m0s for bootstrapping to complete", "INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "ansible-playbook -i inventory.yml retire-bootstrap.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -A", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued 
csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "watch \"oc get csr -A | grep pending -i\"", "Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr csr-m724n", "Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>", "oc adm certificate approve csr-m724n", "openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is \"last octet\" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. 
; ;EOF", "dnf update python3 ansible", "dnf install ovirt-ansible-image-template", "dnf install ovirt-ansible-vm-infra", "export ASSETS_DIR=./wrk", "curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem 1", "sudo chmod 0644 /tmp/ca.pem", "sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem", "sudo update-ca-trust", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir playbooks", "cd playbooks", "curl -s -L -X GET https://api.github.com/repos/openshift/installer/contents/upi/ovirt?ref=release-4.7 | grep 'download_url.*\\.yml' | awk '{ print USD2 }' | sed -r 's/(\"|\",)//g' | xargs -n 1 curl -O", "--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"", "--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata", "rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-openstack.x86_64.qcow2.gz\"", "openshift-install create install-config --dir USDASSETS_DIR", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? 
Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir USDASSETS_DIR", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings", "tree . 
└── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml", "python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openshift-install create ignition-configs --dir USDASSETS_DIR", "tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml bootstrap.yml", "ssh core@<boostrap.ip>", "[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service", "ansible-playbook -i inventory.yml masters.yml", "openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR", "INFO API v1.18.3+b74c5ed up INFO Waiting up to 40m0s for bootstrapping to complete", "INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "ansible-playbook -i inventory.yml retire-bootstrap.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -A", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued 
csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "watch \"oc get csr -A | grep pending -i\"", "Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr csr-m724n", "Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>", "oc adm certificate approve csr-m724n", "openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ansible-playbook -i inventory.yml retire-bootstrap.yml retire-masters.yml retire-workers.yml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-on-rhv
1.5. Security Updates
1.5. Security Updates As security vulnerabilities are discovered, the affected software must be updated to limit any potential security risks. If the software is part of a package within a Red Hat Enterprise Linux distribution that is currently supported, Red Hat is committed to releasing updated packages that fix the vulnerability as soon as possible. Often, announcements about a given security exploit are accompanied by a patch (or source code that fixes the problem). This patch is then applied to the Red Hat Enterprise Linux package, tested, and released as an errata update. However, if an announcement does not include a patch, a developer first works with the maintainer of the software to fix the problem. Once the problem is fixed, the package is tested and released as an errata update. If an errata update is released for software used on your system, it is highly recommended that you update the affected packages as soon as possible to minimize the amount of time the system is potentially vulnerable. 1.5.1. Updating Packages When updating software on a system, it is important to download the update from a trusted source. An attacker can easily rebuild a package with the same version number as the one that is supposed to fix the problem, but with a different security exploit, and release it on the Internet. If this happens, security measures such as verifying files against the original RPM do not detect the exploit. Thus, it is very important to download RPMs only from trusted sources, such as Red Hat, and to check the signature of the package to verify its integrity. Note Red Hat Enterprise Linux includes a convenient panel icon that displays visible alerts when an update is available.
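For example, the signature check mentioned above can be performed with the rpm utility before a manually downloaded package is installed. The package file name below is purely illustrative; the Red Hat release key path is the standard location on Red Hat Enterprise Linux:

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release    # import the Red Hat GPG key if it is not already present
rpm -K httpd-2.2.15-29.el6.x86_64.rpm                       # verify the package digest and GPG signature (hypothetical file name)
yum update httpd                                            # in most cases, apply the errata through the package manager instead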
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Security_Updates
7.255. tog-pegasus
7.255. tog-pegasus 7.255.1. RHBA-2013:0418 - tog-pegasus bug fix and enhancement update Updated tog-pegasus packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. OpenPegasus Web-Based Enterprise Management (WBEM) Services for Linux enables management solutions that deliver increased control of enterprise resources. WBEM is a platform-independent and resource-independent Distributed Management Task Force (DMTF) standard that defines a common information model and communication protocol for monitoring and controlling resources from diverse sources. Note The tog-pegasus package has been upgraded to upstream version 2.12.0, which provides a number of bug fixes and enhancements over the previous version. (BZ#739118, BZ#825471) Bug Fixes BZ#812892 Previously, non-array properties of CMPI instances were not checked for NULL values in the cimserver daemon (OpenPegasus CIM server). This led to an unexpected termination of cimserver. This update provides an upstream patch for cimserver. Now, cimserver correctly returns instances with non-array properties, including those containing NULL values. BZ#869664 Prior to this update, all connections to cimserver were considered local host connections. Consequently, cimserver could not distinguish between local and remote connections, or between granted and denied access. A patch has been provided to fix this bug. Now, cimserver again recognizes whether a connection is local or remote and whether access is granted or denied for a given user. Enhancement BZ#716474 The cimserver daemon uses all of the well-known ports based on CIM/WBEM technology for network communication. This update provides the user with an option to configure which interfaces are used, restricting cimserver to listen only on selected network interfaces. Users of tog-pegasus are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
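The interface restriction enhancement (BZ#716474) is managed with the standard cimconfig utility that ships with tog-pegasus. The sketch below is an assumption based on the upstream 2.12.0 feature: the property is believed to be named listenAddress, but verify the exact name against the cimconfig documentation for your installed version:

cimconfig -s listenAddress=192.168.0.10 -p    # set the planned value; the listenAddress property name is an assumption
service tog-pegasus restart                   # restart cimserver so the planned value takes effect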
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/tog-pegasus
Chapter 11. Maintaining Satellite Server
Chapter 11. Maintaining Satellite Server This chapter provides information on how to maintain a Satellite Server, including information on how to work with audit records, how to clean unused tasks, and how to recover Pulp from a full disk. 11.1. Deleting Audit Records Audit records are created automatically in Satellite. You can use the foreman-rake audits:expire command to remove audits at any time. You can also use a cron job to schedule audit record deletions at the set interval that you want. By default, using the foreman-rake audits:expire command removes audit records that are older than 90 days. You can specify the number of days to keep the audit records by adding the days option and add the number of days. For example, if you want to delete audit records that are older than seven days, enter the following command: 11.2. Anonymizing Audit Records You can use the foreman-rake audits:anonymize command to remove any user account or IP information while maintaining the audit records in the database. You can also use a cron job to schedule anonymizing the audit records at the set interval that you want. By default, using the foreman-rake audits:anonymize command anonymizes audit records that are older than 90 days. You can specify the number of days to keep the audit records by adding the days option and add the number of days. For example, if you want to anonymize audit records that are older than seven days, enter the following command: 11.3. Deleting Report Records Report records are created automatically in Satellite. You can use the foreman-rake reports:expire command to remove reports at any time. You can also use a cron job to schedule report record deletions at the set interval that you want. By default, using the foreman-rake reports:expire command removes report records that are older than 90 days. You can specify the number of days to keep the report records by adding the days option and add the number of days. For example, if you want to delete report records that are older than seven days, enter the following command: 11.4. Configuring the Cleaning Unused Tasks Feature Satellite performs regular cleaning to reduce disc space in the database and limit the rate of disk growth. As a result, Satellite backup completes faster and overall performance is higher. By default, Satellite executes a cron job that cleans tasks every day at 19:45. Satellite removes the following tasks during the cleaning: Tasks that have run successfully and are older than thirty days All tasks that are older than a year You can configure the cleaning unused tasks feature using these options: To configure the time at which Satellite runs the cron job, set the --foreman-plugin-tasks-cron-line parameter to the time you want in cron format. For example, to schedule the cron job to run every day at 15:00, enter the following command: To configure the period after which Satellite deletes the tasks, edit the :rules: section in the /etc/foreman/plugins/foreman-tasks.yaml file. To disable regular task cleanup on Satellite, enter the following command: To reenable regular task cleanup on Satellite, enter the following command: 11.5. Deleting Task Records Task records are created automatically in Satellite. You can use the foreman-rake foreman_tasks:cleanup command to remove tasks at any time. You can also use a cron job to schedule Task record deletions at the set interval that you want. For example, if you want to delete task records from successful repository synchronizations, enter the following command: 11.6. 
Deleting a Task by ID You can delete tasks by ID, for example if you have submitted confidential data by mistake. Procedure Connect to your Satellite Server using SSH: Optional: View the task: Delete the task: Optional: Ensure the task has been removed from Satellite Server: Note that because the task is deleted, this command returns a non-zero exit code. 11.7. Recovering from a Full Disk The following procedure describes how to resolve the situation when a logical volume (LV) with the Pulp database on it has no free space. Procedure Let running Pulp tasks finish but do not trigger any new ones as they can fail due to the full disk. Ensure that the LV with the /var/lib/pulp directory on it has sufficient free space. Here are some ways to achieve that: Remove orphaned content: This is run weekly so it will not free much space. Change the download policy from Immediate to On Demand for as many repositories as possible and remove already downloaded packages. See the Red Hat Knowledgebase solution How to change syncing policy for Repositories on Satellite from "Immediate" to "On-Demand" on the Red Hat Customer Portal for instructions. Grow the file system on the LV with the /var/lib/pulp directory on it. For more information, see Growing a File System on a Logical Volume in the Red Hat Enterprise Linux 7 Logical Volume Manager Administration Guide . Note If you use an untypical file system (other than for example ext3, ext4, or xfs), you might need to unmount the file system so that it is not in use. In that case, complete the following steps: Stop Satellite services: Grow the file system on the LV. Start Satellite services: If some Pulp tasks failed due to the full disk, run them again. 11.8. Managing Packages on the Base Operating System of Satellite Server or Capsule Server To install and update packages on the Satellite Server or Capsule Server base operating system, you must enter the satellite-maintain packages command. Satellite prevents users from installing and updating packages with yum because yum might also update the packages related to Satellite Server or Capsule Server and result in system inconsistency. Important The satellite-maintain packages command restarts some services on the operating system where you run it because it runs the satellite-installer command after installing packages. Procedure To install packages on Satellite Server or Capsule Server, enter the following command: To update specific packages on Satellite Server or Capsule Server, enter the following command: To update all packages on Satellite Server or Capsule Server, enter the following command: Using yum to Check for Package Updates If you want to check for updates using yum , enter the command to install and update packages manually and then you can use yum to check for updates: Updating packages individually can lead to package inconsistencies in Satellite Server or Capsule Server. For more information about updating packages in Satellite Server, see Updating Satellite Server . Enabling yum for Satellite Server or Capsule Server Package Management If you want to install and update packages on your system using yum directly and control the stability of the system yourself, enter the following command: Restoring Package Management to the Default Settings If you want to restore the default settings and enable Satellite Server or Capsule Server to prevent users from installing and updating packages with yum and ensure the stability of the system, enter the following command: 11.9. 
Reclaiming PostgreSQL Space The PostgreSQL database can use a large amount of disk space especially in heavily loaded deployments. Use this procedure to reclaim some of this disk space on Satellite. Procedure Stop all services, except for the postgresql service: Switch to the postgres user and reclaim space on the database: Start the other services when the vacuum completes: 11.10. Reclaiming Space From On Demand Repositories If you set the download policy to on demand, Satellite downloads packages only when the clients request them. You can clean up these packages to reclaim space. For a single repository In the Satellite web UI, navigate to Content > Products . Select a product. On the Repositories tab, click the repository name. From the Select Actions list, select Reclaim Space . For multiple repositories In the Satellite web UI, navigate to Content > Products . Select the product name. On the Repositories tab, select the checkbox of the repositories. Click Reclaim Space at the top right corner. For Capsules In the Satellite web UI, navigate to Infrastructure > Capsules . Select the Capsule Server. Click Reclaim space .
[ "foreman-rake audits:expire days=7", "foreman-rake audits:anonymize days=7", "foreman-rake reports:expire days=7", "satellite-installer --foreman-plugin-tasks-cron-line \"00 15 * * *\"", "satellite-installer --foreman-plugin-tasks-automatic-cleanup false", "satellite-installer --foreman-plugin-tasks-automatic-cleanup true", "foreman-rake foreman_tasks:cleanup TASK_SEARCH='label = Actions::Katello::Repository::Sync' STATES='stopped'", "ssh [email protected]", "hammer task info --id My_Task_ID", "foreman-rake foreman_tasks:cleanup TASK_SEARCH=\"id= My_Task_ID \"", "hammer task info --id My_Task_ID", "foreman-rake katello:delete_orphaned_content RAILS_ENV=production", "satellite-maintain service stop", "satellite-maintain service start", "satellite-maintain packages install package_1 package_2", "satellite-maintain packages update package_1 package_2", "satellite-maintain packages update", "satellite-maintain packages unlock yum check update satellite-maintain packages lock", "satellite-maintain packages unlock", "satellite-maintain packages lock", "satellite-maintain service stop --exclude postgresql", "su - postgres -c 'vacuumdb --full --all'", "satellite-maintain service start" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/Maintaining_Server_admin
Chapter 24. Updated Drivers
Chapter 24. Updated Drivers Storage Driver Updates The Microsemi Smart Family Controller driver (smartpqi.ko.xz) has been updated to version 1.1.4-115. The HP Smart Array Controller driver (hpsa.ko.xz) has been updated to version 3.4.20-125-RH1. The Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:12.0.0.5. The Avago MegaRAID SAS driver (megaraid_sas.ko.xz) has been updated to version 07.705.02.00-rh1. The Dell PERC2, 2/Si, 3/Si, 3/Di, Adaptec Advanced Raid Products, HP NetRAID-4M, IBM ServeRAID & ICP SCS driver (aacraid.ko.xz) has been updated to version 1.2.1[50877]-custom. The QLogic FastLinQ 4xxxx iSCSI Module driver (qedi.ko.xz) has been updated to version 8.33.0.20. The QLogic Fibre Channel HBA driver (qla2xxx.ko.xz) has been updated to version 10.00.00.06.07.6-k. The QLogic QEDF 25/40/50/100Gb FCoE driver (qedf.ko.x) has been updated to version 8.33.0.20. The LSI MPT Fusion SAS 3.0 Device driver (mpt3sas.ko.xz) has been updated to version 16.100.01.00. The LSI MPT Fusion SAS 2.0 Device driver (mpt2sas.ko.xz) has been updated to version 20.103.01.00. Network Driver Updates The Realtek RTL8152/RTL8153 Based USB Ethernet Adapters driver (r8152.ko.xz) has been updated to version v1.09.9. The VMware vmxnet3 virtual NIC driver (vmxnet3.ko.xz) has been updated to version 1.4.14.0-k. The Intel(R) Ethernet Connection XL710 Network driver (i40e.ko.xz) has been updated to version 2.3.2-k. The Intel(R) 10 Gigabit Virtual Function Network driver (ixgbevf.ko.xz) has been updated to version 4.1.0-k-rh7.6. The Intel(R) 10 Gigabit PCI Express Network driver (ixgbe.ko.xz) has been updated to version 5.1.0-k-rh7.6. The Intel(R) XL710 X710 Virtual Function Network driver (i40evf.ko.xz) has been updated to version 3.2.2-k. The Intel(R) Ethernet Switch Host Interface driver (fm10k.ko.xz) has been updated to version 0.22.1-k. The Broadcom BCM573xx network driver (bnxt_en.ko.xz) has been updated to version 1.9.1. The Cavium LiquidIO Intelligent Server Adapter driver (liquidio.ko.xz) has been updated to version 1.7.2. The Cavium LiquidIO Intelligent Server Adapter Virtual Function driver (liquidio_vf.ko.xz) has been updated to version 1.7.2. The Elastic Network Adapter (ENA) driver (ena.ko.xz) has been updated to version 1.5.0K. The aQuantia Corporation Network driver (atlantic.ko.xz) has been updated to version 2.0.2.1-kern. The QLogic FastLinQ 4xxxx Ethernet driver (qede.ko.xz) has been updated to version 8.33.0.20. The QLogic FastLinQ 4xxxx Core Module driver (qed.ko.xz) has been updated to version 8.33.0.20. The Cisco VIC Ethernet NIC driver (enic.ko.xz) has been updated to version 2.3.0.53. Graphics Driver and Miscellaneous Driver Updates The VMware Memory Control (Balloon) driver (vmw_balloon.ko.xz) has been updated to version 1.4.1.0-k. The HP watchdog driver (hpwdt.ko.xz) has been updated to version 1.4.0-RH1k. The standalone drm driver for the VMware SVGA device (vmwgfx.ko.xz) has been updated to version 2.14.1.0.
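If you need to confirm which of the driver versions listed above is actually installed on a given system, the modinfo utility reports a module's version string. A minimal check, using two of the drivers named in this chapter as examples:

modinfo -F version ixgbe
modinfo -F version qla2xxx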
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/updated_drivers
Chapter 11. VolumeSnapshotClass [snapshot.storage.k8s.io/v1]
Chapter 11. VolumeSnapshotClass [snapshot.storage.k8s.io/v1] Description VolumeSnapshotClass specifies parameters that a underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced Type object Required deletionPolicy driver 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deletionPolicy string deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. Required. driver string driver is the name of the storage driver that handles this VolumeSnapshotClass. Required. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata parameters object (string) parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes. 11.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses DELETE : delete collection of VolumeSnapshotClass GET : list objects of kind VolumeSnapshotClass POST : create a VolumeSnapshotClass /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses/{name} DELETE : delete a VolumeSnapshotClass GET : read the specified VolumeSnapshotClass PATCH : partially update the specified VolumeSnapshotClass PUT : replace the specified VolumeSnapshotClass 11.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses Table 11.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeSnapshotClass Table 11.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotClass Table 11.4. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.5. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClassList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotClass Table 11.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.7. Body parameters Parameter Type Description body VolumeSnapshotClass schema Table 11.8. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 201 - Created VolumeSnapshotClass schema 202 - Accepted VolumeSnapshotClass schema 401 - Unauthorized Empty 11.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses/{name} Table 11.9. Global path parameters Parameter Type Description name string name of the VolumeSnapshotClass Table 11.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a VolumeSnapshotClass Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.12. Body parameters Parameter Type Description body DeleteOptions schema Table 11.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotClass Table 11.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.15. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotClass Table 11.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.17. Body parameters Parameter Type Description body Patch schema Table 11.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotClass Table 11.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.20. Body parameters Parameter Type Description body VolumeSnapshotClass schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 201 - Created VolumeSnapshotClass schema 401 - Unauthorized Empty
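As a concrete illustration of the schema described above, the following manifest defines a minimal VolumeSnapshotClass. The driver name is a placeholder for whatever CSI driver your cluster uses, and the optional parameters map is omitted because its keys are driver-specific.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
driver: example.csi.vendor.com   # assumption: replace with the CSI driver that handles your snapshots
deletionPolicy: Delete           # or Retain, as described for the deletionPolicy field above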
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage_apis/volumesnapshotclass-snapshot-storage-k8s-io-v1
B.36. kdelibs
B.36. kdelibs B.36.1. RHSA-2011:0464 - Moderate: kdelibs security update Updated kdelibs packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. The kdelibs packages provide libraries for the K Desktop Environment (KDE). CVE-2011-1168 A cross-site scripting (XSS) flaw was found in the way KHTML, the HTML layout engine used by KDE applications such as the Konqueror web browser, displayed certain error pages. A remote attacker could use this flaw to perform a cross-site scripting attack against victims by tricking them into visiting a specially-crafted URL. CVE-2011-1094 A flaw was found in the way kdelibs checked the user-specified hostname against the name in the server's SSL certificate. A man-in-the-middle attacker could use this flaw to trick an application using kdelibs into mistakenly accepting a certificate as if it was valid for the host, if that certificate was issued for an IP address to which the user-specified hostname resolved. Note Note that as part of the fix, this update also introduces stricter handling for wildcards used in servers' SSL certificates. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The desktop must be restarted (log out, then log back in) for this update to take effect.
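To apply this erratum on an affected Red Hat Enterprise Linux 6 system, update the kdelibs packages with yum; if the yum security plugin is installed, the advisory can also be targeted directly. Both commands below are sketches and assume the system is subscribed to the appropriate repositories.

yum update kdelibs
yum update --advisory RHSA-2011:0464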
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/kdelibs
Chapter 2. Differences from upstream OpenJDK 11
Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies .
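Because the FIPS and cryptographic-policy behaviour described above follows the state of the underlying RHEL host, it can be helpful to check that state when troubleshooting Java behaviour. A short sketch using standard RHEL 8 or later host tools, not OpenJDK-specific commands:

fips-mode-setup --check          # reports whether the host is running in FIPS mode
update-crypto-policies --show    # prints the active system-wide cryptographic policy, for example DEFAULT or FIPS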
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.19/rn-openjdk-diff-from-upstream
33.2. Systems Registered with Red Hat Satellite
33.2. Systems Registered with Red Hat Satellite If the system is registered with a Red Hat Satellite server, locate the system on the Systems tab and delete the appropriate profile. For additional information, see the Red Hat Satellite Quick Start Guide .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-subscription-management-unregistering-satellite
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 1-43 Fri Feb 7 2020 Jan Fiala Async release with an update of the Compliance and Vulnerability Scanning chapter. Revision 1-42 Fri Aug 9 2019 Mirek Jahoda Version for 7.7 GA publication. Revision 1-41 Sat Oct 20 2018 Mirek Jahoda Version for 7.6 GA publication. Revision 1-32 Wed Apr 4 2018 Mirek Jahoda Version for 7.5 GA publication. Revision 1-30 Thu Jul 27 2017 Mirek Jahoda Version for 7.4 GA publication. Revision 1-24 Mon Feb 6 2017 Mirek Jahoda Async release with misc. updates, especially in the firewalld section. Revision 1-23 Tue Nov 1 2016 Mirek Jahoda Version for 7.3 GA publication. Revision 1-19 Mon Jul 18 2016 Mirek Jahoda The Smart Cards section added. Revision 1-18 Mon Jun 27 2016 Mirek Jahoda The OpenSCAP-daemon and Atomic Scan section added. Revision 1-17 Fri Jun 3 2016 Mirek Jahoda Async release with misc. updates. Revision 1-16 Tue Jan 5 2016 Robert Kratky Post 7.2 GA fixes. Revision 1-15 Tue Nov 10 2015 Robert Kratky Version for 7.2 GA release. Revision 1-14.18 Mon Nov 09 2015 Robert Kratky Async release with misc. updates. Revision 1-14.17 Wed Feb 18 2015 Robert Kratky Version for 7.1 GA release. Revision 1-14.15 Fri Dec 06 2014 Robert Kratky Update to sort order on the Red Hat Customer Portal. Revision 1-14.13 Thu Nov 27 2014 Robert Kratky Updates reflecting the POODLE vuln. Revision 1-14.12 Tue Jun 03 2014 Tomas Capek Version for 7.0 GA release.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/app-security_guide-revision_history
4.7. Eaton Network Power Switch
4.7. Eaton Network Power Switch Table 4.8, "Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_eaton_snmp , the fence agent for the Eaton over SNMP network power switch. Table 4.8. Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name A name for the Eaton network power switch connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. This parameter is always required. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.7, "Eaton Network Power Switch" shows the configuration screen for adding an Eaton Network Power Switch fence device. Figure 4.7. Eaton Network Power Switch The following command creates a fence device instance for an Eaton Network Power Switch device: The following is the cluster.conf entry for the fence_eaton_snmp device:
[ "ccs -f cluster.conf --addfencedev eatontest agent=fence_eaton_snmp ipaddr=192.168.0.1 login=root passwd=password123 power_wait=60 snmp_priv_passwd=eatonpassword123 udpport=161", "<fencedevices> <fencedevice agent=\"fence_eaton_snmp\" community=\"private\" ipaddr=\"eatonhost\" login=\"eatonlogin\" name=\"eatontest\" passwd=\"password123\" passwd_script=\"eatonpwscr\" power_wait=\"3333\" snmp_priv_passwd=\"eatonprivprotpass\" snmp_priv_passwd_script=\"eatonprivprotpwscr\" udpport=\"161\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-eaton-CA
8.264. xz
8.264. xz 8.264.1. RHBA-2014:0769 - xz bug fix update Updated xz packages that fix several bugs are now available for Red Hat Enterprise Linux 6. XZ Utils is an integrated collection of user-space file compression utilities based on the Lempel-Ziv-Markov chain algorithm (LZMA), which performs lossless data compression. The algorithm provides a high compression ratio while keeping the decompression time short. Bug Fixes BZ# 850898 Previously, the '-h' option of the xzgrep command was not included. As a consequence, the matching lines in the output were prefixed with the corresponding file names. This update adds the '-h' option, and running the 'xzgrep -h' command now suppresses the file name on output as expected. BZ# 863024 Prior to this update, running the 'xzgrep -l' command did not work correctly because the source code did not handle the '-q' option appropriately. As a consequence, an error message was displayed. A patch has been applied to handle the 'grep -q' command in the source code correctly. As a result, running the 'xzgrep -l' suppresses normal output and prints file names with a matching line as expected. BZ# 988703 The xzfgrep command is supposed to act as an alias for the 'xzgrep -F' command. Previously, this alias behavior was not set correctly, and the patterns were processed as regular expressions and not as fixed strings. As a consequence, running the xzfgrep command produced no output. With this update, xzfgrep command works as an alias of the 'xzgrep -F' command, and running the xzfgrep command returns correct output. BZ# 1108085 Previously, the xzgrep command returned exit status 1 when contents of one or more files did not match the requested pattern. With this update, xzgrep returns exit status 0 if there is at least one match of the pattern, which makes the xzgrep behavior consistent with the default grep command behavior. Users of xz are advised to upgrade to these updated packages, which fix these bugs.
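The following invocations illustrate the corrected behaviour described in these bug fixes; the patterns and file names are placeholders.

xzgrep -h 'pattern' notes.txt.xz      # prints matching lines without the file name prefix
xzgrep -l 'pattern' *.xz              # prints only the names of files that contain a match
xzfgrep 'a.literal[string]' data.xz   # treats the pattern as a fixed string, like 'xzgrep -F'
xzgrep 'pattern' data.xz; echo $?     # returns exit status 0 when at least one line matches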
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/xz
Chapter 8. Identity management with LDAP
Chapter 8. Identity management with LDAP If you have configured the Identity service (keystone) to authenticate against or to retrieve identity information from an LDAP server, you can secure LDAP communication for the Identity service by using a CA certificate. You must obtain the CA certificate from Active Directory, convert the CA certificate file into Privacy Enhanced Mail (PEM) file format, and configure secure LDAP communication for the Identity service. You can perform this configuration with one of three methods, depending on where and how the CA trust is configured. 8.1. Obtaining the CA certificate from Active Directory Use the following example code to query Active Directory to obtain the CA certificate. The CA_NAME is the name of the certificate and the rest of the parameters can be changed according to your configuration: 8.2. Converting the CA certificate into PEM format Before you can configure LDAP in the Identity service (keystone), you must convert the CA certificate to PEM format. Procedure Create a file called /path/cacert.pem and include the contents of the LDAP query that obtained the CA certificate from Active Directory, within the header and footer: For troubleshooting, you can execute the following query to check if LDAP is working, and to ensure that the PEM certificate file was created correctly. The query should return a result similar to: Run the following command to get a CA certificate if it is hosted by a web server. $HOST=redhat.com $PORT=443 8.3. Configuring secure LDAP communication for the Identity service Use one of the following methods to configure LDAP for the Identity service (keystone). Method 1 Use this method if the CA trust is configured at the LDAP level using a PEM file. Manually specify the location of a CA certificate file. The following procedure secures LDAP communication not only for the Identity service, but for all applications that use the OpenLDAP libraries. Copy the file containing your CA certificate chain in PEM format to the /etc/openldap/certs directory. Edit /etc/openldap/ldap.conf and add the following directive, replacing [CA_FILE] with the location and name of the CA certificate file: Restart the horizon container: Method 2 Use this method if the CA trust is configured at the LDAP library level using a Network Security Services (NSS) database. Use the certutil command to import and trust a CA certificate into the NSS certificate database used by the OpenLDAP libraries. The following procedure secures LDAP communication not only for the Identity service, but for all applications that use the OpenLDAP libraries. Import and trust the certificate, replacing [CA_FILE] with the location and name of the CA certificate file: Confirm the CA certificate was imported correctly: Your CA certificate is listed, and the trust attributes are set to CT,, . Restart the horizon container: Method 3 Use this method if the CA trust is configured at the Keystone level using a PEM file. The final method of securing communication between the Identity service and an LDAP server is to configure TLS for the Identity service. However, unlike the two methods above, this method secures LDAP communication only for the Identity service and does not secure LDAP communication for other applications that use the OpenLDAP libraries. The following procedure uses the openstack-config command to edit values in the /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf file. 
Enable TLS: Specify the location of the certificate, replacing [CA_FILE] with the name of the CA certificate: Specify the client certificate checks performed on incoming TLS sessions from the LDAP server, replacing [CERT_BEHAVIOR] with one of the behaviors listed below: demand a certificate will always be requested from the LDAP server. The session will be terminated if no certificate is provided, or if the certificate provided cannot be verified against the existing certificate authorities file. allow a certificate will always be requested from the LDAP server. The session will proceed as normal even if a certificate is not provided. If a certificate is provided but it cannot be verified against the existing certificate authorities file, the certificate will be ignored and the session will proceed as normal. never a certificate will never be requested. Restart the keystone and horizon containers:
[ "CA_NAME=\"WIN2012DOM-WIN2012-CA\" AD_SUFFIX=\"dc=win2012dom,dc=com\" LDAPURL=\"ldap://win2012.win2012dom.com\" ADMIN_DN=\"cn=Administrator,cn=Users,USDAD_SUFFIX\" ADMINPASSWORD=\"MyPassword\" CA_CERT_DN=\"cn=latexmath:[USDCA_NAME,cn=certification authorities,cn=public key services,cn=services,cn=configuration,USD]AD_SUFFIX\" TMP_CACERT=/tmp/cacert.`date +'%Y%m%d%H%M%S'`.USDUSD.pem ldapsearch -xLLL -H latexmath:[USDLDAPURL -D `echo \\\"USD]ADMIN_DN\"`-W -s base -b`echo \"USDCA_CERT_DN\"` objectclass=* cACertificate", "-----BEGIN CERTIFICATE----- MIIDbzCCAlegAwIBAgIQQD14hh1Yz7tPFLXCkKUOszANB... -----END CERTIFICATE-----", "LDAPTLS_CACERT=/path/cacert.pem ldapsearch -xLLL -ZZ -H USDLDAPURL -s base -b \"\" \"objectclass=*\" currenttime", "dn: currentTime: 20141022050611.0Z", "echo Q | openssl s_client -connect USDHOST:USDPORT | sed -n -e '/BEGIN CERTIFICATE/,/END CERTIFICATE/ p'", "TLS_CACERT /etc/openldap/certs/[CA_FILE]", "systemctl restart tripleo_horizon", "certutil -d /etc/openldap/certs -A -n \"My CA\" -t CT,, -a -i [CA_FILE] certutil -d /etc/openldap/certs -A -n \"My CA\" -t CT,, -a -i [CA_FILE]", "certutil -d /etc/openldap/certs -L", "systemctl restart tripleo_horizon", "openstack-config --set /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf ldap use_tls True", "openstack-config --set /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf ldap tls_cacertfile [CA_FILE]", "openstack-config --set /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf ldap tls_req_cert [CERT_BEHAVIOR]", "systemctl restart tripleo_keystone systemctl restart tripleo_horizon" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/users_and_identity_management_guide/assembly_identity
5.3. Uploading VMDK images to vSphere
5.3. Uploading VMDK images to vSphere Image Builder can generate images suitable for uploading to a VMware ESXi or vSphere system. This section describes steps to upload a VMDK image to VMware vSphere. Note Because VMware deployments typically do not have cloud-init configured to inject user credentials into virtual machines, you have to perform that task yourself on the blueprint. Prerequisites You must have a VMDK image created by Image Builder. Use the vmdk output type in CLI or VMware Virtual Machine Disk (.vmdk) in GUI when creating the image. Procedure 1. Upload the image into vSphere via HTTP. Click on Upload Files in the vCenter: 2. When you create a VM, on the Device Configuration, delete the default New Hard Disk and use the drop-down to select an Existing Hard Disk disk image: Figure 5.3. Virtualization type 3. Make sure you use an IDE device as the Virtual Device Node for the disk you create. The default SCSI value results in an unbootable virtual machine. Figure 5.4. Virtualization type
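Because, as noted above, cloud-init is typically not available to inject credentials, the blueprint used to build the image should define a user, and the image itself is produced with the vmdk output type. The following sketch shows both steps; the blueprint name, user name, and SSH key are placeholders.

# blueprint excerpt (TOML) - defines a login user because cloud-init is not used
[[customizations.user]]
name = "admin"
key = "ssh-rsa AAAA... user@example.com"

# build the VMDK image from the blueprint
composer-cli compose start vsphere-blueprint vmdk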
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter5-section_3
Chapter 19. Configuring automatic crash dumps by using the kdump RHEL System Role
Chapter 19. Configuring automatic crash dumps by using the kdump RHEL System Role To manage kdump using Ansible, you can use the kdump role, which is one of the RHEL System Roles available in RHEL 7.9. Using the kdump role enables you to specify where to save the contents of the system's memory for later analysis. 19.1. The kdump RHEL System Role The kdump System Role enables you to set basic kernel dump parameters on multiple systems. 19.2. kdump role parameters The parameters used for the kdump RHEL System Role are: Role Variable Description kdump_path The path to which vmcore is written. If kdump_target is not null, path is relative to that dump target. Otherwise, it must be an absolute path in the root file system. Additional resources The makedumpfile(8) man page. For details about the parameters used in kdump and additional information about the kdump System Role, see the /usr/share/ansible/roles/rhel-system-roles.kdump/README.md file. 19.3. Configuring kdump using RHEL System Roles You can set basic kernel dump parameters on multiple systems using the kdump System Role by running an Ansible playbook. Warning The kdump role replaces the kdump configuration of the managed hosts entirely by replacing the /etc/kdump.conf file. Additionally, if the kdump role is applied, all kdump settings are also replaced, even if they are not specified by the role variables, by replacing the /etc/sysconfig/kdump file. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file which lists the systems on which you want to deploy kdump . Procedure Create a new playbook.yml file with the following content: --- - hosts: kdump-test vars: kdump_path: /var/crash roles: - rhel-system-roles.kdump Optional: Verify playbook syntax. Run the playbook on your inventory file: Additional resources For a detailed reference on kdump role variables, see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/kdump directory. See Preparing the control node and managed nodes to use RHEL System Roles Documentation installed with the rhel-system-roles package /usr/share/ansible/roles/rhel-system-roles.kdump/README.html
[ "--- - hosts: kdump-test vars: kdump_path: /var/crash roles: - rhel-system-roles.kdump", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory_file /path/to/file/playbook.yml" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/configuring-automatic-crash-dumps-by-using-the-kdump-rhel-system-role_automating-system-administration-by-using-rhel-system-roles
8.3. Interface Control Scripts
8.3. Interface Control Scripts The interface control scripts activate and deactivate system interfaces. There are two primary interface control scripts, /sbin/ifdown and /sbin/ifup , that call on control scripts located in the /etc/sysconfig/network-scripts/ directory. The ifup and ifdown interface scripts are symbolic links to scripts in the /sbin/ directory. When either of these scripts is called, it requires the value of the interface to be specified, such as: Warning The ifup and ifdown interface scripts are the only scripts that the user should use to bring up and take down network interfaces. The following scripts are described for reference purposes only. Two files used to perform a variety of network initialization tasks during the process of bringing up a network interface are /etc/rc.d/init.d/functions and /etc/sysconfig/network-scripts/network-functions . Refer to Section 8.4, "Network Function Files" for more information. After verifying that an interface has been specified and that the user executing the request is allowed to control the interface, the correct script brings the interface up or down. The following are common interface control scripts found within the /etc/sysconfig/network-scripts/ directory: ifup-aliases - Configures IP aliases from interface configuration files when more than one IP address is associated with an interface. ifup-ippp and ifdown-ippp - Brings ISDN interfaces up and down. ifup-ipsec and ifdown-ipsec - Brings IPsec interfaces up and down. ifup-ipv6 and ifdown-ipv6 - Brings IPv6 interfaces up and down. ifup-ipx - Brings up an IPX interface. ifup-plip - Brings up a PLIP interface. ifup-plusb - Brings up a USB interface for network connections. ifup-post and ifdown-post - Contains commands to be executed after an interface is brought up or down. ifup-ppp and ifdown-ppp - Brings a PPP interface up or down. ifup-routes - Adds static routes for a device as its interface is brought up. ifdown-sit and ifup-sit - Contains function calls related to bringing up and down an IPv6 tunnel within an IPv4 connection. ifup-sl and ifdown-sl - Brings a SLIP interface up or down. ifup-wireless - Brings up a wireless interface. Warning Removing or modifying any scripts in the /etc/sysconfig/network-scripts/ directory can cause interface connections to act irregularly or fail. Only advanced users should modify scripts related to a network interface. The easiest way to manipulate all network scripts simultaneously is to use the /sbin/service command on the network service ( /etc/rc.d/init.d/network ), as illustrated by the following command: In this example, <action> can be either start , stop , or restart . To view a list of configured devices and currently active network interfaces, use the following command:
[ "ifup eth0", "service network <action>", "service network status" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-networkscripts-control
7.2. Certificate System Packages
7.2. Certificate System Packages When installing the Certificate System packages you can either install them for each subsystem individually or all at once. Important To install and update Certificate Server packages, you must enable the corresponding repository. For details, see Section 6.8, "Attaching a Red Hat Subscription and Enabling the Certificate System Package Repository" . The following subsystem packages and components are available in Red Hat Certificate System: pki-ca : Provides the Certificate Authority (CA) subsystem. pki-kra : Provides the Key Recovery Authority (KRA) subsystem. pki-ocsp : Provides the Online Certificate Status Protocol (OCSP) responder. pki-tks : Provides the Token Key Service (TKS). pki-tps : Provides the Token Processing Service (TPS). pki-console and redhat-pki-console-theme : Provides the Java-based Red Hat PKI console. Both packages must be installed. pki-server and redhat-pki-server-theme : Provides the web-based Certificate System interface. Both packages must be installed. This package is installed as a dependency if you install one of the following packages: pki-ca , pki-kra , pki-ocsp , pki-tks , pki-tps 7.2.1. Installing Certificate System Packages With the redhat-pki module, you can install all Certificate System subsystem packages and components at once on a RHEL 8 system. The redhat-pki module installs the five subsystems of Red Hat Certificate System: in addition to the pki-core module (CA, KRA), which is part of Red Hat Identity Management (IdM), it includes the RHCS-specific subsystems (OCSP, TKS, and TPS) as well as the pki-deps module that takes care of the required dependencies. Alternatively, you can install packages separately. For example, to install the CA subsystem and the optional web interface: For other subsystems, replace the pki-ca package name with that of the subsystem you want to install. If you require the optional PKI console: Note The pkiconsole tool will be deprecated. 7.2.2. Updating Certificate System Packages To update Certificate System and operating system packages, use the following procedure: Follow instructions in Section 7.2.3, "Determining Certificate System Product Version" to check the product version. Execute # yum update The command above updates the whole system including the RHCS packages. Note We suggest scheduling a maintenance window during which you can take the PKI infrastructure offline to install the update. Important Updating Certificate System requires the PKI infrastructure to be restarted. Then check the version again by following Section 7.2.3, "Determining Certificate System Product Version" . The version number should confirm that the update was successfully installed. To optionally download updates without installing, use the --downloadonly option in the above procedure: The downloaded packages are stored in the /var/cache/yum/ directory. A later yum update will use the downloaded packages if they are still the latest versions. 7.2.3. Determining Certificate System Product Version The Red Hat Certificate System product version is stored in the /usr/share/pki/CS_SERVER_VERSION file. 
To display the version: To find the product version of a running server, access the following URLs from your browser: http:// host_name : port_number /ca/admin/ca/getStatus http:// host_name : port_number /kra/admin/kra/getStatus http:// host_name : port_number /ocsp/admin/ocsp/getStatus http:// host_name : port_number /tks/admin/tks/getStatus http:// host_name : port_number /tps/admin/tps/getStatus Note Note that each component is a separate package and thus could have a separate version number. The above will show the version number for each currently running component.
[ "yum install redhat-pki", "yum install pki-ca redhat-pki-server-theme", "yum install pki-console redhat-pki-console-theme", "update --downloadonly", "cat /usr/share/pki/CS_SERVER_VERSION Red Hat Certificate System 10.0 (Batch Update 1)" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/certificate_system_packages
Preface
Preface You can extend Red Hat Ansible Automation Platform to connect to automation services catalog on cloud.redhat.com by using the Ansible Automation Platform setup or setup bundle installer. This is a Day 2 activity and requires setting up a service account that has write permissions on all basic resources and objects in automation controller: organizations, users, projects, job templates, and inventories. The Catalog worker requires a set of variables assigned to hosts in your Red Hat Ansible Automation Platform network. Running the Catalog worker will create an application and an application token, install the necessary packages, and start the service. Important Support for automation services catalog is no longer available in Ansible Automation Platform 2.4 and later.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/installing_the_automation_services_catalog_worker/pr01
Preface
Preface Enterprise Contract is a policy-driven workflow tool for maintaining software supply chain security by defining and enforcing policies for building and testing container images. A secure CI/CD workflow should include artifact verification to detect problems early. It's the job of Enterprise Contract to validate that a container image is signed and attested by a known and trusted build system.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/managing_compliance_with_enterprise_contract/pr01
Chapter 3. New features
Chapter 3. New features This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage. 3.1. The Cephadm utility The Cephadm-Ansible modules The Cephadm-Ansible package provides several modules that wrap the new integrated control plane, cephadm , for those users that wish to manage their entire datacenter with Ansible. It does not intend to provide backward compatibility with Ceph-Ansible , but it aims to deliver a supported set of playbooks that customers can use to update their Ansible integration. See The cephadm-ansible modules for more details. Bootstrap Red Hat Ceph Storage cluster is supported on Red Hat Enterprise Linux 9 With this release, cephadm bootstrap is available on Red Hat Enterprise Linux 9 hosts to enable Red Hat Ceph Storage 5.2 support for Red Hat Enterprise Linux 9. Users can now bootstrap a Ceph cluster on Red Hat Enterprise Linux 9 hosts. cephadm rm-cluster command cleans up the old systemd unit files from host Previously, the rm-cluster command would tear down the daemons without removing the systemd unit files. With this release, cephadm rm-cluster command , along with purging the daemons, cleans up the old systemd unit files as well from the host. cephadm raises health warnings if it fails to apply a specification Previously, failures to apply a specification would only be reported as a service event which the users would often not check. With this release, cephadm raises health warnings if it fails to apply a specification, such as an incorrect pool name in an iscsi specification, to alert users. Red Hat Ceph Storage 5.2 supports staggered upgrade Starting with Red Hat Ceph Storage 5.2, you can selectively upgrade large Ceph clusters in cephadm in multiple smaller steps. The ceph orch upgrade start command accepts the following parameters: --daemon-types --hosts --services --limit These parameters selectively upgrade daemons that match the provided values. Note These parameters are rejected if they cause cephadm to upgrade daemons out of the supported order. Note These upgrade parameters are accepted if your active Ceph Manager daemon is on a Red Hat Ceph Storage 5.2 build. Upgrades to Red Hat Ceph Storage 5.2 from an earlier version does not support these parameters. fs.aio-max-nr is set to 1048576 on hosts with OSDs Previously, leaving fs.aio-max-nr as the default value of 65536 on hosts managed by Cephadm could cause some OSDs to crash. With this release, fs.aio-max-nr is set to 1048576 on hosts with OSDs and OSDs no longer crash as a result of the value of fs.aio-max-nr parameter being too low. ceph orch rm <service-name> command informs users if the service they attempted to remove exists or not Previously, removing a service would always return a successful message, even for non-existent services causing confusion among users. With this release, running ceph orch rm SERVICE_NAME command informs users if the service they attempted to remove exists or not in cephadm . A new playbook rocksdb-resharding.yml for resharding procedure is now available in cephadm-ansible Previously, the rocksdb resharding procedure entailed tedious manual steps. With this release, the cephadm-ansible playbook, rocksdb-resharding.yml is implemented for enabling rocksdb resharding which makes the process easy. cephadm now supports deploying OSDs without an LVM layer With this release, to support users who do not want a LVM layer for their OSDs, cephadm or ceph-volume support is provided for raw OSDs. 
You can include "method: raw" in an OSD specification file passed to Cephadm, to deploy OSDs in raw mode through Cephadm without the LVM layer. With this release, cephadm supports using method: raw in the OSD specification yaml file to deploy OSDs in raw mode without an LVM layer. See Deploying Ceph OSDs on specific devices and hosts for more details. 3.2. Ceph Dashboard Start, stop, restart, and redeploy actions can be performed on underlying daemons of services Previously, orchestrator services could only be created, edited, and deleted. No action could be performed on the underlying daemons of the services With this release, actions such as starting, stopping, restarting, and redeploying can be performed on the underlying daemons of orchestrator services. OSD page and landing page on the Ceph Dashboard displays different colours in the usage bar of OSDs Previously, whenever an OSD would reach near full or full status, the cluster health would change to WARN or ERROR status, but from the landing page, there was no other sign of failure. With this release, when an OSD turns to near full ratio or full, the OSD page for that particular OSD, as well as the landing page, displays different colours in the usage bar. Dashboard displays onode hit or miss counters With this release, the dashboard provides details pulled from the Bluestore stats, to display the onode hit or miss counters to help you deduce whether increasing the RAM per OSD could help improve the cluster performance. Users can view the CPU and memory usage of a particular daemon With this release, you can see CPU and memory usage of a particular daemon on the Red Hat Ceph Storage Dashboard under Cluster > Host> Daemons. Improved Ceph Dashboard features for rbd-mirroring With this release, the RBD Mirroring tab on the Ceph Dashboard is enhanced with the following features that were previously present only in the command-line interface (CLI): Support for enabling or disabling mirroring in images. Support for promoting and demoting actions. Support for resyncing images. Improve visibility for editing site names and create bootstrap key. A blank page consisting a button to automatically create an rbd-mirror appears if none exist. User can now create OSDs in simple and advanced mode on the Red Hat Ceph Storage Dashboard With this release, to simplify OSD deployment for the clusters with simpler deployment scenarios, "Simple" and "Advanced" modes for OSD Creation are introduced. You can now choose from three new options: Cost/Capacity-optimized: All the available HDDs are used to deploy OSDs. Throughput-optimized: HDDs are supported for data devices and SSDs for DB/WAL devices. IOPS-optimized: All the available NVMEs are used as data devices. See Management of OSDs using the Ceph Orchestrator for more details. Ceph Dashboard Login page displays customizable text Corporate users want to ensure that anyone accessing their system is acknowledged and is committed to comply with the legal/security terms. With this release, a placeholder is provided on the Ceph Dashboard login page, to display a customized banner or warning text. The Ceph Dashboard admin can set, edit, or delete the banner with the following commands: Example When enabled, the Dashboard login page displays a customizable text. 
Major version number and internal Ceph version is displayed on the Ceph Dashboard With this release, along with the major version number, the internal Ceph version is also displayed on the Ceph Dashboard, to help users relate Red Hat Ceph Storage downstream releases to Ceph internal versions. For example, Version: 16.2.9-98-gccaadd . Click the top navigation bar, click the question mark menu (?), and navigate to the About modal box to identify the Red Hat Ceph Storage release number and the corresponding Ceph version. 3.3. Ceph File System New capabilities are available for CephFS subvolumes in ODF configured in external mode If CephFS in ODF is configured in external mode, users like to use volume/subvolume metadata to store some Openshift specific metadata information, such as the PVC/PV/namespace from the volumes/subvolumes. With this release, the following capabilities to set, get, update, list, and remove custom metadata from CephFS subvolume are added. Set custom metadata on the subvolume as a key-value pair using: Syntax Get custom metadata set on the subvolume using the metadata key: Syntax List custom metadata, key-value pairs, set on the subvolume: Syntax Remove custom metadata set on the subvolume using the metadata key: Syntax Reason for clone failure shows up when using clone status command Previously, whenever a clone failed, the only way to check the reason for failure was by looking into the logs. With this release, the reason for clone failure is shown in the output of the clone status command: Example The reason for a clone failure is shown in two fields: errno : error number error_msg : failure error string 3.4. Ceph Manager plugins CephFS NFS export can be dynamically updated using the ceph nfs export apply command Previously, when updating a CephFS NFS export, the NFS-Ganesha servers were always restarted. This temporarily affected all the client connections served by the ganesha servers including those exports that were not updated. With this release, a CephFS NFS export can now be dynamically updated using the ceph nfs export apply command. The NFS servers are no longer restarted every time a CephFS NFS export is updated. 3.5. The Ceph Volume utility Users need not manually wipe devices prior to redeploying OSDs Previously, users were forced to manually wipe devices prior to redeploying OSDs. With this release, post zapping, physical volumes on devices are removed when no volume groups or logical volumes are remaining, thereby users are not forced to manually wipe devices anymore prior to redeploying OSDs. 3.6. Ceph Object Gateway Ceph Object Gateway can now be configured to direct its Ops Log to an ordinary Unix file. With this release, Ceph Object Gateway can be configured to direct its Ops Log to an ordinary Unix file, as a file-based log is simpler to work with in some sites, when compared to a Unix domain socket. The content of the log file is identical to what would be sent to the Ops Log socket in the default configuration. Use the radosgw lc process command to process a single bucket's lifecycle With this release, users can now use the radosgw-admin lc process command to process only a single bucket's lifecycle from the command line interface by specifying its name --bucket or ID --bucket-id , as processing the lifecycle for a single bucket is convenient in many situations, such as debugging. 
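As an illustration of the single-bucket lifecycle processing described above, a hedged invocation might be the following; the bucket name is a placeholder, and --bucket-id can be used instead of --bucket to select the bucket by ID:

radosgw-admin lc process --bucket my-test-bucket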
User identity information is added to the Ceph Object Gateway Ops Log output With this release, user identity information is added to the Ops Log output to enable customers to access this information for auditing of S3 access. User identities can be reliably tracked by S3 request in all versions of the Ceph Object Gateway Ops Log. Log levels for Ceph Object Gateway's HTTP access logging can be controlled independently with debug_rgw_access parameter With this release, log levels for Ceph Object Gateway's HTTP access logging can be controlled independently with the debug_rgw_access parameter to provide users the ability to disable all Ceph Object Gateway logging such as debug_rgw=0 except for these HTTP access log lines. Level 20 Ceph Object Gateway log messages are reduced when updating bucket indices With this release, the Ceph Object Gateway level 20 log messages are reduced when updating bucket indices to remove messages that do not add value and to reduce size of logs. 3.7. Multi-site Ceph Object Gateway current_time field is added to the output of several radosgw-admin commands With this release, current_time field is added to several radosgw-admin commands, specifically sync status , bucket sync status , metadata sync status , data sync status , and bilog status . Logging of HTTP client Previously, Ceph Object Gateway was neither printing error bodies of HTTP responses nor was there a way to match the request to the response. With this release, a more thorough logging of HTTP client is implemented by maintaining a tag to match a HTTP request to a HTTP response for the async HTTP client and error bodies. When the Ceph Object Gateway debug is set to twenty, error bodies and other details are printed. Read-only role for OpenStack Keystone is now available The OpenStack Keystone service provides three roles: admin , member , and reader . In order to extend the role based access control (RBAC) capabilities to OpenStack, a new read-only Admin role can now be assigned to specific users in the Keystone service. The support scope for RBAC is based on the OpenStack release. 3.8. Packages New version of grafana container provides security fixes and improved functionality With this release, a new version of grafana container, rebased with grafana v8.3.5 is built, which provides security fixes and improved functionality. 3.9. RADOS MANY_OBJECTS_PER_PG warning is no longer reported when pg_autoscale_mode is set to on Previously, Ceph health warning MANY_OBJECTS_PER_PG was reported in instances where pg_autoscale_mode was set to on with no distinction between the different modes that reported the health warning. With this release, a check is added to omit reporting MANY_OBJECTS_PER_PG warning when pg_autoscale_mode is set to on . OSDs report the slow operations details in an aggregated format to the Ceph Manager service Previously, slow requests would overwhelm a cluster log with too many details, filling up the monitor database. With this release, slow requests get logged in the cluster log by operation type and pool information and is based on OSDs reporting aggregated slow operations details to the manager service. Users can now blocklist a CIDR range With this release, you can blocklist a CIDR range, in addition to individual client instances and IPs. In certain circumstances, you would want to blocklist all clients in an entire data center or rack instead of specifying individual clients to blocklist. 
For example, when failing over a workload to a different set of machines, you might want to prevent the old workload instance from continuing to operate partially. This is now possible using a "blocklist range", analogous to the existing "blocklist" command. 3.10. The Ceph Ansible utility A new Ansible playbook is now available for backing up and restoring Ceph files Previously, users had to manually back up and restore files when either upgrading the OS from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 or reprovisioning their machines, which was inconvenient, especially for large cluster deployments. With this release, the backup-and-restore-ceph-files.yml playbook is added to back up and restore Ceph files, such as /etc/ceph and /var/lib/ceph , which eliminates the need for users to manually back up and restore these files.
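Returning to the CIDR blocklisting enhancement above, a hedged example of blocklisting and later releasing an entire subnet might look like this; the range is a placeholder, and the range subcommand should be confirmed against your Ceph release:

ceph osd blocklist range add 192.168.10.0/24
ceph osd blocklist range rm 192.168.10.0/24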
[ "ceph dashboard set-login-banner -i filename.yaml ceph dashboard get-login-banner ceph dashboard unset-login-banner", "ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group-name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group-name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group-name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group-name SUBVOLUME_GROUP_NAME ] [--force]", "ceph fs clone status cephfs clone1 { \"status\": { \"state\": \"failed\", \"source\": { \"volume\": \"cephfs\", \"subvolume\": \"subvol1\", \"snapshot\": \"snap1\" \"size\": \"104857600\" }, \"failure\": { \"errno\": \"122\", \"errstr\": \"Disk quota exceeded\" } } }" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.2_release_notes/enhancements
Chapter 6. Configuring Load-balancing service flavors
Chapter 6. Configuring Load-balancing service flavors Load-balancing service (octavia) flavors are sets of provider configuration options that you create. When users request load balancers, they can specify that the load balancer be built using one of the defined flavors. You define a flavor for each load-balancing provider driver, which exposes the unique capabilities of the respective provider. To create a new Load-balancing service flavor: Decide which capabilities of the load-balancing provider you want to configure in the flavor. Create the flavor profile with the flavor capabilities you have chosen. Create the flavor. Section 6.1, "Listing Load-balancing service provider capabilities" Section 6.2, "Defining flavor profiles" Section 6.3, "Creating Load-balancing service flavors" 6.1. Listing Load-balancing service provider capabilities You can review the list of capabilities that each Load-balancing service (octavia) provider driver exposes. Procedure Source a credentials file that gives you access to the overcloud with the RHOSP admin role. Example List the capabilities for each driver: Replace <provider> with the name or UUID of the provider. Example The command output lists all of the capabilities that the provider supports. Sample output Note the names of the capabilities that you want to include in the flavor that you are creating. Additional resources Section 6.2, "Defining flavor profiles" loadbalancer provider capability list in the Command Line Interface Reference 6.2. Defining flavor profiles Load-balancing service (octavia) flavor profiles contain the provider driver name and a list of capabilities. You use flavor profiles to create flavors that users specify to create a load balancer. Prerequisites You must know which load-balancing provider and which of its capabilities you want to include in the flavor profile. Procedure Source a credentials file that gives you access to the overcloud with the RHOSP admin role. Example Create a flavor profile: Example Sample output In this example, a flavor profile is created for the amphora provider. When this profile is specified in a flavor, the load balancer that users create by using the flavor is a single amphora load balancer. Verification When you create a flavor profile, the Load-balancing service validates the flavor values with the provider to ensure that the provider can support the capabilities that you have specified. Additional resources Section 6.3, "Creating Load-balancing service flavors" loadbalancer flavorprofile create in the Command Line Interface Reference 6.3. Creating Load-balancing service flavors You create a user-facing flavor for the Load-balancing service (octavia) by using a flavor profile. The name that you assign to the flavor is the value that users specify when they create a load balancer. Prerequisites You must have created a flavor profile. Procedure Source a credentials file that gives you access to the overcloud with the RHOSP admin role. Example Create a flavor: Tip Provide a detailed description so that users can understand the capabilities of the flavor that you are providing. Example Sample output In this example, a flavor has been defined. When users specify this flavor, they create a load balancer that uses one Load-balancing service instance (amphora) and is not highly available. Note Disabled flavors are still visible to users, but users cannot use the disabled flavor to create a load balancer. 
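As an illustration of how users consume a flavor, a hedged example of creating a load balancer with the standalone-lb flavor defined above might be the following; the load balancer name and subnet are placeholders:

openstack loadbalancer create --name lb-test --vip-subnet-id private-subnet --flavor standalone-lb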
Additional resources Section 6.2, "Defining flavor profiles" loadbalancer flavor create in the Command Line Interface Reference
[ "source ~/my_overcloudrc", "openstack loadbalancer provider capability list <provider>", "openstack loadbalancer provider capability list amphora", "+-----------------------+---------------------------------------------------+ | name | description | +-----------------------+---------------------------------------------------+ | loadbalancer_topology | The load balancer topology. One of: SINGLE - One | | | amphora per load balancer. ACTIVE_STANDBY - Two | | | amphora per load balancer. | | ... | ... | +-----------------------+---------------------------------------------------+", "source ~/my_overcloudrc", "openstack loadbalancer flavorprofile create --name <profile_name> --provider <provider_name> --flavor-data '{\"<capability>\": \"<value>\"}'", "openstack loadbalancer flavorprofile create --name amphora-single-profile --provider amphora --flavor-data '{\"loadbalancer_topology\": \"SINGLE\"}'", "+---------------+--------------------------------------+ | Field | Value | +---------------+--------------------------------------+ | id | 72b53ac2-b191-48eb-8f73-ed012caca23a | | name | amphora-single-profile | | provider_name | amphora | | flavor_data | {\"loadbalancer_topology\": \"SINGLE\"} | +---------------+--------------------------------------+", "source ~/my_overcloudrc", "openstack loadbalancer flavor create --name <flavor_name> --flavorprofile <flavor-profile> --description \"<string>\"", "openstack loadbalancer flavor create --name standalone-lb --flavorprofile amphora-single-profile --description \"A non-high availability load balancer for testing.\"", "+-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | id | 25cda2d8-f735-4744-b936-d30405c05359 | | name | standalone-lb | | flavor_profile_id | 72b53ac2-b191-48eb-8f73-ed012caca23a | | enabled | True | | description | A non-high availability load | | | balancer for testing. | +-------------------+--------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/using_octavia_for_load_balancing-as-a-service/config-lb-service-flavors_rhosp-lbaas
Managing hybrid and multicloud resources
Managing hybrid and multicloud resources Red Hat OpenShift Data Foundation 4.9 Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa). Red Hat Storage Documentation Team Abstract This document explains how to manage storage resources across a hybrid cloud or multicloud environment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. Chapter 2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint . Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Section 2.1, "Accessing the Multicloud Object Gateway from the terminal" Section 2.2, "Accessing the Multicloud Object Gateway from the MCG command-line interface" Example 2.1. 
Example Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style. 2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint 2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You now have the relevant endpoint, access key, and secret access key in order to connect to your applications. Example 2.2. Example If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: Chapter 3. Allowing user access to the Multicloud Object Gateway Console To allow access to the Multicloud Object Gateway (MCG) Console to a user, ensure that the user meets the following conditions: User is in cluster-admins group. User is in system:cluster-admins virtual group. Prerequisites A running OpenShift Data Foundation Platform. Procedure Enable access to the MCG console. Perform the following steps once on the cluster : Create a cluster-admins group. Bind the group to the cluster-admin role. Add or remove users from the cluster-admins group to control access to the MCG console. To add a set of users to the cluster-admins group : where <user-name> is the name of the user to be added. Note If you are adding a set of users to the cluster-admins group, you do not need to bind the newly added users to the cluster-admin role to allow access to the OpenShift Data Foundation dashboard. To remove a set of users from the cluster-admins group : where <user-name> is the name of the user to be removed. Verification steps On the OpenShift Web Console, login as a user with access permission to Multicloud Object Gateway Console. Navigate to Storage -> OpenShift Data Foundation . In the Storage Systems tab, select the storage system and then click Overview -> Object tab. Select the Multicloud Object Gateway link. Click Allow selected permissions . Chapter 4. Adding storage resources for hybrid or Multicloud 4.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Backing Store tab. Click Create Backing Store . 
On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Enter an Endpoint . This is optional. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 4.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Backing Store tab to view all the backing stores. 4.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters. Add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 4.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 4.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 4.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 4.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 4.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 4.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 4.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. 
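A hedged sketch of the MCG CLI invocation described above; all angle-bracket values are placeholders, and the command assumes the default openshift-storage namespace:

noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage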
The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 4.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing IBM COS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <endpoint> with a regional endpoint that corresponds to the location of the existing IBM bucket name. This argument tells Multicloud Object Gateway which endpoint to use for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 4.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. 
For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> with an AZURE account key and account name you created for this purpose. Replace <blob container name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <blob-container-name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 4.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <PATH TO GCP PRIVATE KEY JSON FILE> with a path to your GCP private key created for this purpose. Replace <GCP bucket name> with an existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own GCP service account private key using Base64, and use the results in place of <GCP PRIVATE KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <target bucket> with an existing Google storage bucket. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 4.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. 
For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume. Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: You can also add storage resources using a YAML: Apply the following YAML for a specific backing store: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume. Note that the letter G should remain. Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd . 4.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 4.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Storage Systems tab, select the storage system and then click Overview -> Object tab. Select the Multicloud Object Gateway link. 
Select the Resources tab in the left, highlighted below. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 4.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class (OBC). Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab and search the new Bucket Class. 4.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 4.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. 
To remove a backing store from the bucket class, clear the name of the backing store. Click Save . Chapter 5. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 5.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 5.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 5.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites A running OpenShift Data Foundation Platform Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <resource-name> with the name you want to give to the resource. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket. 
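A hedged sketch of a single-type namespace bucket class using the placeholders above; the API group and field names follow the NooBaa BucketClass CRD and should be verified against your OpenShift Data Foundation version:

apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: <resource>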
A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of the names of the namespace-stores that defines the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <resource-name> with the name you want to give to the resource. Replace <my-bucket> with the name you want to give to the bucket. Replace <my-bucket-class> with the bucket class created in the step. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 5.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with a the name of a single namespace-store that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of the names of namespace-stores that defines the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <resource-name> with the name you want to give to the resource. Replace <my-bucket> with the name you want to give to the bucket. Replace <my-bucket-class> with the bucket class created in the step. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 5.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Data Foundation Platform. 
Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 5.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. 
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 5.3. Adding a namespace bucket using the OpenShift Container Platform user interface With the release of OpenShift Data Foundation 4.8, namespace buckets can be added using the OpenShift Container Platform user interface. For more information about namespace buckets, see Managing namespace buckets . Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage -> OpenShift Data Foundation. Click the Namespace Store tab to create a namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify the namespacestore is in the Ready state. Repeat these steps until you have the desired amount of resources. Click the Bucket Class tab -> Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. Add a description (optional). Click . Choose a namespace policy type for your namespace bucket, and then click . Select the target resource(s). If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. 
If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click . Review your new bucket class, and then click Create Bucketclass . On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click Multicloud Object Gateway -> Buckets -> Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a Name for the namespace bucket and click . On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in step 5 that the namespace bucket should read data from. If the namespace policy type you are using is Multi , then Under Write Policy , specify which namespace resource the namespace bucket should write data to. Click . Click Create . Verification Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. Chapter 6. Mirroring data for hybrid and Multicloud buckets The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. Prerequisites You must first add a backing storage that can be used by the MCG, see Chapter 4, Adding storage resources for hybrid or Multicloud . Then you create a bucket class that reflects the data management policy, mirroring. Procedure You can set up mirroring data in three ways: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" Section 6.3, "Configuring buckets to mirror data using the user interface" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim, generating a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . 6.3. Configuring buckets to mirror data using the user interface In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. On the NooBaa page, click the buckets icon on the left side. You can see a list of your buckets: Click the bucket you want to update. Click Edit Tier 1 Resources : Select Mirror and check the relevant resources you want to use for this bucket. In the following example, the data between noobaa-default-backing-store which is on RGW and AWS-backingstore which is on AWS is mirrored: Click Save . Note Resources created in NooBaa UI cannot be used by OpenShift UI or Multicloud Object Gateway (MCG) CLI. Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. 
Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. About bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. See the following example: There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . Instructions for creating S3 users can be found in Section 7.3, "Creating an AWS S3 user in the Multicloud Object Gateway" . Using an AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. 7.3. Creating an AWS S3 user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. Under the Accounts tab, click Create Account . Select S3 Access Only , provide the Account Name , for example, [email protected] . Click . Select S3 default placement , for example, noobaa-default-backing-store . Select Buckets Permissions . A specific bucket or all buckets can be selected. Click Create . Chapter 8. Multicloud Object Gateway bucket and bucket class replication Data replication between buckets provides higher resiliency and better collaboration options. These buckets can be either data buckets backed by any supported storage solution (S3, Azure, and so on), or namespace buckets (where PV Pool and GCP are not supported). For more information on how to create a backingstore, see Adding storage resources for hybrid or Multicloud using the MCG command line interface . For more information on how to create a namespacestore, see Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML . A bucket replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix.
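For illustration, a replication policy of the kind described above is simply a JSON list of such rules. A minimal single-rule sketch, saved to a file that can later be passed to the CLI or to an OBC as shown in this chapter (the rule ID, the destination bucket first.bucket , and the repl prefix are example values), could look like this:

cat > /path/to/json-file.json <<'EOF'
[
  {
    "rule_id": "rule-1",
    "destination_bucket": "first.bucket",
    "filter": { "prefix": "repl" }
  }
]
EOF

With this rule, only objects whose keys start with repl are replicated to first.bucket ; leaving the prefix empty ( {"prefix": ""} ) replicates every object.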
Configuring a complementing replication policy on a second bucket results in bi-directional replication. Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface. Important Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Power use the following command: Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Important Choose the correct Product Variant according to your architecture. Note Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG's features. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1. Replicating a bucket to another bucket using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Example 8.1. Example 8.1.2. Replicating a bucket to another bucket using a YAML Applications that require a Multicloud Object Gateway (MCG) data bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and add the spec.additionalConfig.replication-policy parameter to the OBC. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. 
Setting a bucket class replication policy using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucketclass and define the replication-policy parameter in a JSON file. It is possible to set a bucket class replication policy for two types of bucket classes: Placement Namespace Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. It is possible to pass several backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Example 8.2. Example This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucket class using the spec.replicationPolicy field. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. Each object that is uploaded to the Object Bucket Claim (OBC) bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. It is possible to pass several backingstores. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Chapter 9. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 9.1, "Dynamic Object Bucket Claim" Section 9.2, "Creating an Object Bucket Claim using the command line interface" Section 9.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 9.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket Claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. You can add more lines to the YAML file to automate the use of the OBC.
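For reference, the OBC stanza referred to above is short. A minimal sketch, using the same placeholders as the replacement instructions, looks like the following:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc-name>
spec:
  generateBucketName: <obc-bucket-name>
  storageClassName: openshift-storage.noobaa.io

Once applied, the operator provisions the bucket and exposes its connection details through a configuration map and a secret that carry the same name as the claim, which is what the mapping example below consumes.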
The example below is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 9.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . Example output: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: Example output: Run the following command to view the YAML file for the new OBC: Example output: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: Example output: The secret gives you the S3 access credentials. Run the following command to view the configuration map: Example output: The configuration map contains the S3 endpoint information for your application. 9.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 9.1, "Dynamic Object Bucket Claim" . 
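If you want to inspect those generated objects directly, you can list them from the command line; this is an optional check, and the claim name and namespace below are placeholders:

oc get obc <obc-name> -n <namespace>
oc get cm <obc-name> -n <namespace> -o yaml
oc get secret <obc-name> -n <namespace> -o yaml

Both the configuration map and the secret carry the same name as the claim.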
Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Bucket Claims -> Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page: Additional Resources Chapter 9, Object Bucket Claim 9.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Bucket Claims . Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . Additional Resources Chapter 9, Object Bucket Claim 9.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Buckets . Alternatively, you can also navigate to the details page of a specific OBC and click the Resource link to view the object buckets for that OBC. Select the object bucket you want to see details for. You are navigated to the Object Bucket Details page. Additional Resources Chapter 9, Object Bucket Claim 9.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Additional Resources Chapter 9, Object Bucket Claim Chapter 10. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. AWS S3 IBM COS 10.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. In case of IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource.
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. 
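For reference, the Secret described above has the following shape; the placeholders match the replacement instructions, and both values must be Base64-encoded before they are added to the data section:

apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
  IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>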
Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Chapter 11. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 11.1. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. 11.2. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview -> Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step, create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment.
If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources -> Storage resources -> Resource name . Chapter 12. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. Prerequisites A running OpenShift Data Foundation Platform Procedure Run oc get service command to get the RGW service name. Run oc expose command to expose the RGW service. Replace <RGW-service name> with the RGW service name from the step. Replace <route name> with a route you want to create for the RGW service. For example: Run oc get route command to confirm oc expose is successful and there is an RGW route. Example output: Verification To verify the ENDPOINT , run the following command: Replace <ENDPOINT> with the route that you get from the command in step 3. For example: Important To get the access key and secret of the default user ocs-storagecluster-cephobjectstoreuser , run the following commands: Access key: Secret key:
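Putting the chapter together, a condensed sketch of the expose-and-verify flow follows; the route hostname is an example value, and the long secret name is that of the default user ocs-storagecluster-cephobjectstoreuser :

# Expose the RGW service and confirm that the route exists
oc expose svc/rook-ceph-rgw-ocs-storagecluster-cephobjectstore --hostname=rook-ceph-rgw-ocs.ocp.host.example.com
oc get route -n openshift-storage

# Retrieve the access key and secret key of the default object store user
ACCESS_KEY=$(oc get secret rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -n openshift-storage -o yaml | grep -w "AccessKey:" | head -n1 | awk '{print $2}' | base64 --decode)
SECRET_KEY=$(oc get secret rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -n openshift-storage -o yaml | grep -w "SecretKey:" | head -n1 | awk '{print $2}' | base64 --decode)

# Verify the endpoint by listing buckets through the exposed route
AWS_ACCESS_KEY_ID=$ACCESS_KEY AWS_SECRET_ACCESS_KEY=$SECRET_KEY aws s3 --no-verify-ssl --endpoint http://rook-ceph-rgw-ocs.ocp.host.example.com ls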
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "oc describe noobaa -n openshift-storage", "Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa status -n openshift-storage", "INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" 
INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# #-----------------# No OBC's found.", "AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls", "oc adm groups new cluster-admins", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admins", "oc adm groups add-users cluster-admins <user-name> <user-name> <user-name>", "oc adm groups remove-users cluster-admins <user-name> <user-name> <user-name>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket 
<bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name>", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name>", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>", "apiVersion: 
noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create pv-pool <backingstore_name> --num-volumes=<NUMBER OF VOLUMES> --pv-size-gb=<VOLUME SIZE> --storage-class=<LOCAL STORAGE CLASS>", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> storageClass: <LOCAL STORAGE CLASS> type: pv-pool", "noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint>", "get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"", "apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa 
name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror", "noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror", "additionalConfig: bucketclass: mirror-to-aws", "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms", "yum 
install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replication-policy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]", "noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io", "apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY", "oc apply -f <yaml.file>", "oc get cm <obc-name> -o yaml", "oc get secret <obc_name> -o yaml", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa obc create <obc-name> -n openshift-storage", "INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"", "oc get obc -n openshift-storage", "NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s", "oc get obc test21obc -o yaml -n openshift-storage", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: 
obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound", "oc get -n openshift-storage secret test21obc -o yaml", "Example output: apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque", "oc get -n openshift-storage cm test21obc -o yaml", "apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 
kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "oc get service -n openshift-storage NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE (...) rook-ceph-rgw-ocs-storagecluster-cephobjectstore ClusterIP 172.30.145.254 <none> 80/TCP,443/TCP 5d7h (...)", "oc expose svc/<RGW service name> --hostname=<route name>", "oc expose svc/rook-ceph-rgw-ocs-storagecluster-cephobjectstore --hostname=rook-ceph-rgw-ocs.ocp.host.example.com", "oc get route -n openshift-storage", "NAME HOST/PORT PATH rook-ceph-rgw-ocs-storagecluster-cephobjectstore rook-ceph-rgw-ocs.ocp.host.example.com SERVICES PORT TERMINATION WILDCARD rook-ceph-rgw-ocs-storagecluster-cephobjectstore http <none>", "aws s3 --no-verify-ssl --endpoint <ENDPOINT> ls", "aws s3 --no-verify-ssl --endpoint http://rook-ceph-rgw-ocs.ocp.host.example.com ls", "oc get secret rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -n openshift-storage -o yaml | grep -w \"AccessKey:\" | head -n1 | awk '{print USD2}' | base64 --decode", "oc get secret rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -n openshift-storage -o yaml | grep -w \"SecretKey:\" | head -n1 | awk '{print USD2}' | base64 --decode" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html-single/managing_hybrid_and_multicloud_resources/creating-an-ibm-cos-cache-bucket_rhocs
Chapter 71. subnet This chapter describes the commands under the subnet command. 71.1. subnet create Create a subnet Usage: Table 71.1. Positional arguments Value Summary <name> New subnet name Table 71.2. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --subnet-pool <subnet-pool> Subnet pool from which this subnet will obtain a cidr (Name or ID) --use-prefix-delegation USE_PREFIX_DELEGATION Use prefix-delegation if ip is ipv6 format and ip would be delegated externally --use-default-subnet-pool Use default subnet pool for --ip-version --prefix-length <prefix-length> Prefix length for subnet allocation from subnet pool --subnet-range <subnet-range> Subnet range in cidr notation (required if --subnet- pool is not specified, optional otherwise) --dhcp Enable dhcp (default) --no-dhcp Disable dhcp --dns-publish-fixed-ip Enable publishing fixed ips in dns --no-dns-publish-fixed-ip Disable publishing fixed ips in dns (default) --gateway <gateway> Specify a gateway for the subnet. the three options are: <ip-address>: Specific IP address to use as the gateway, auto : Gateway address should automatically be chosen from within the subnet itself, none : This subnet will not use a gateway, e.g.: --gateway 192.168.9.1, --gateway auto, --gateway none (default is auto ). --ip-version {4,6} Ip version (default is 4). note that when subnet pool is specified, IP version is determined from the subnet pool and this option is ignored. --ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac} Ipv6 ra (router advertisement) mode, valid modes: [dhcpv6-stateful, dhcpv6-stateless, slaac] --ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac} Ipv6 address mode, valid modes: [dhcpv6-stateful, dhcpv6-stateless, slaac] --network-segment <network-segment> Network segment to associate with this subnet (name or ID) --network <network> Network this subnet belongs to (name or id) --description <description> Set subnet description --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses for this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to add multiple IP addresses) --dns-nameserver <dns-nameserver> Dns server for this subnet (repeat option to set multiple DNS servers) --host-route destination=<subnet>,gateway=<ip-address> Additional route for this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to add multiple routes) --service-type <service-type> Service type for this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to set multiple service types) --tag <tag> Tag to be added to the subnet (repeat option to set multiple tags) --no-tag No tags associated with the subnet Table 71.3. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.2. subnet delete Delete subnet(s) Usage: Table 71.7. Positional arguments Value Summary <subnet> Subnet(s) to delete (name or id) Table 71.8. Command arguments Value Summary -h, --help Show this help message and exit 71.3. subnet list List subnets Usage: Table 71.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --ip-version <ip-version> List only subnets of given ip version in output. Allowed values for IP version are 4 and 6. --dhcp List subnets which have dhcp enabled --no-dhcp List subnets which have dhcp disabled --service-type <service-type> List only subnets of a given service type in output e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to list multiple service types) --project <project> List only subnets which belong to a given project in output (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --network <network> List only subnets which belong to a given network in output (name or ID) --gateway <gateway> List only subnets of given gateway ip in output --name <name> List only subnets of given name in output --subnet-range <subnet-range> List only subnets of given subnet range (in cidr notation) in output e.g.: --subnet-range 10.10.0.0/16 --subnet-pool <subnet-pool> List only subnets which belong to a given subnet pool in output (Name or ID) --tags <tag>[,<tag>,... ] List subnets which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List subnets which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude subnets which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude subnets which have any given tag(s) (comma- separated list of tags) Table 71.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 71.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 71.12. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.4. subnet pool create Create subnet pool Usage: Table 71.14. Positional arguments Value Summary <name> Name of the new subnet pool Table 71.15. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --pool-prefix <pool-prefix> Set subnet pool prefixes (in cidr notation) (repeat option to set multiple prefixes) --default-prefix-length <default-prefix-length> Set subnet pool default prefix length --min-prefix-length <min-prefix-length> Set subnet pool minimum prefix length --max-prefix-length <max-prefix-length> Set subnet pool maximum prefix length --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --address-scope <address-scope> Set address scope associated with the subnet pool (name or ID), prefixes must be unique across address scopes --default Set this as a default subnet pool --no-default Set this as a non-default subnet pool --share Set this subnet pool as shared --no-share Set this subnet pool as not shared --description <description> Set subnet pool description --default-quota <num-ip-addresses> Set default per-project quota for this subnet pool as the number of IP addresses that can be allocated from the subnet pool --tag <tag> Tag to be added to the subnet pool (repeat option to set multiple tags) --no-tag No tags associated with the subnet pool Table 71.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.5. subnet pool delete Delete subnet pool(s) Usage: Table 71.20. Positional arguments Value Summary <subnet-pool> Subnet pool(s) to delete (name or id) Table 71.21. Command arguments Value Summary -h, --help Show this help message and exit 71.6. 
subnet pool list List subnet pools Usage: Table 71.22. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --share List subnet pools shared between projects --no-share List subnet pools not shared between projects --default List subnet pools used as the default external subnet pool --no-default List subnet pools not used as the default external subnet pool --project <project> List subnet pools according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --name <name> List only subnet pools of given name in output --address-scope <address-scope> List only subnet pools of given address scope in output (name or ID) --tags <tag>[,<tag>,... ] List subnet pools which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List subnet pools which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude subnet pools which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude subnet pools which have any given tag(s) (Comma-separated list of tags) Table 71.23. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 71.24. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 71.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.7. subnet pool set Set subnet pool properties Usage: Table 71.27. Positional arguments Value Summary <subnet-pool> Subnet pool to modify (name or id) Table 71.28. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. 
--name <name> Set subnet pool name --pool-prefix <pool-prefix> Set subnet pool prefixes (in cidr notation) (repeat option to set multiple prefixes) --default-prefix-length <default-prefix-length> Set subnet pool default prefix length --min-prefix-length <min-prefix-length> Set subnet pool minimum prefix length --max-prefix-length <max-prefix-length> Set subnet pool maximum prefix length --address-scope <address-scope> Set address scope associated with the subnet pool (name or ID), prefixes must be unique across address scopes --no-address-scope Remove address scope associated with the subnet pool --default Set this as a default subnet pool --no-default Set this as a non-default subnet pool --description <description> Set subnet pool description --default-quota <num-ip-addresses> Set default per-project quota for this subnet pool as the number of IP addresses that can be allocated from the subnet pool --tag <tag> Tag to be added to the subnet pool (repeat option to set multiple tags) --no-tag Clear tags associated with the subnet pool. specify both --tag and --no-tag to overwrite current tags 71.8. subnet pool show Display subnet pool details Usage: Table 71.29. Positional arguments Value Summary <subnet-pool> Subnet pool to display (name or id) Table 71.30. Command arguments Value Summary -h, --help Show this help message and exit Table 71.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.9. subnet pool unset Unset subnet pool properties Usage: Table 71.35. Positional arguments Value Summary <subnet-pool> Subnet pool to modify (name or id) Table 71.36. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag to be removed from the subnet pool (repeat option to remove multiple tags) --all-tag Clear all tags associated with the subnet pool 71.10. subnet set Set subnet properties Usage: Table 71.37. Positional arguments Value Summary <subnet> Subnet to modify (name or id) Table 71.38. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --name <name> Updated name of the subnet --dhcp Enable dhcp --no-dhcp Disable dhcp --dns-publish-fixed-ip Enable publishing fixed ips in dns --no-dns-publish-fixed-ip Disable publishing fixed ips in dns --gateway <gateway> Specify a gateway for the subnet. 
the options are: <ip-address>: Specific IP address to use as the gateway, none : This subnet will not use a gateway, e.g.: --gateway 192.168.9.1, --gateway none. --network-segment <network-segment> Network segment to associate with this subnet (name or ID). It is only allowed to set the segment if the current value is None , the network must also have only one segment and only one subnet can exist on the network. --description <description> Set subnet description --tag <tag> Tag to be added to the subnet (repeat option to set multiple tags) --no-tag Clear tags associated with the subnet. specify both --tag and --no-tag to overwrite current tags --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses for this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to add multiple IP addresses) --no-allocation-pool Clear associated allocation-pools from the subnet. Specify both --allocation-pool and --no-allocation- pool to overwrite the current allocation pool information. --dns-nameserver <dns-nameserver> Dns server for this subnet (repeat option to set multiple DNS servers) --no-dns-nameservers Clear existing information of dns nameservers. specify both --dns-nameserver and --no-dns-nameserver to overwrite the current DNS Nameserver information. --host-route destination=<subnet>,gateway=<ip-address> Additional route for this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to add multiple routes) --no-host-route Clear associated host-routes from the subnet. specify both --host-route and --no-host-route to overwrite the current host route information. --service-type <service-type> Service type for this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to set multiple service types) 71.11. subnet show Display subnet details Usage: Table 71.39. Positional arguments Value Summary <subnet> Subnet to display (name or id) Table 71.40. Command arguments Value Summary -h, --help Show this help message and exit Table 71.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.12. subnet unset Unset subnet properties Usage: Table 71.45. Positional arguments Value Summary <subnet> Subnet to modify (name or id) Table 71.46. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . 
In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses to be removed from this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to unset multiple allocation pools) --gateway Remove gateway ip from this subnet --dns-nameserver <dns-nameserver> Dns server to be removed from this subnet (repeat option to unset multiple DNS servers) --host-route destination=<subnet>,gateway=<ip-address> Route to be removed from this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to unset multiple host routes) --service-type <service-type> Service type to be removed from this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to unset multiple service types) --tag <tag> Tag to be removed from the subnet (repeat option to remove multiple tags) --all-tag Clear all tags associated with the subnet
[ "openstack subnet create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--project <project>] [--project-domain <project-domain>] [--subnet-pool <subnet-pool> | --use-prefix-delegation USE_PREFIX_DELEGATION | --use-default-subnet-pool] [--prefix-length <prefix-length>] [--subnet-range <subnet-range>] [--dhcp | --no-dhcp] [--dns-publish-fixed-ip | --no-dns-publish-fixed-ip] [--gateway <gateway>] [--ip-version {4,6}] [--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}] [--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}] [--network-segment <network-segment>] --network <network> [--description <description>] [--allocation-pool start=<ip-address>,end=<ip-address>] [--dns-nameserver <dns-nameserver>] [--host-route destination=<subnet>,gateway=<ip-address>] [--service-type <service-type>] [--tag <tag> | --no-tag] <name>", "openstack subnet delete [-h] <subnet> [<subnet> ...]", "openstack subnet list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--ip-version <ip-version>] [--dhcp | --no-dhcp] [--service-type <service-type>] [--project <project>] [--project-domain <project-domain>] [--network <network>] [--gateway <gateway>] [--name <name>] [--subnet-range <subnet-range>] [--subnet-pool <subnet-pool>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack subnet pool create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] --pool-prefix <pool-prefix> [--default-prefix-length <default-prefix-length>] [--min-prefix-length <min-prefix-length>] [--max-prefix-length <max-prefix-length>] [--project <project>] [--project-domain <project-domain>] [--address-scope <address-scope>] [--default | --no-default] [--share | --no-share] [--description <description>] [--default-quota <num-ip-addresses>] [--tag <tag> | --no-tag] <name>", "openstack subnet pool delete [-h] <subnet-pool> [<subnet-pool> ...]", "openstack subnet pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--share | --no-share] [--default | --no-default] [--project <project>] [--project-domain <project-domain>] [--name <name>] [--address-scope <address-scope>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack subnet pool set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--pool-prefix <pool-prefix>] [--default-prefix-length <default-prefix-length>] [--min-prefix-length <min-prefix-length>] [--max-prefix-length <max-prefix-length>] [--address-scope <address-scope> | --no-address-scope] [--default | --no-default] [--description <description>] [--default-quota <num-ip-addresses>] [--tag <tag>] [--no-tag] <subnet-pool>", "openstack subnet pool show [-h] [-f {json,shell,table,value,yaml}] [-c 
COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <subnet-pool>", "openstack subnet pool unset [-h] [--tag <tag> | --all-tag] <subnet-pool>", "openstack subnet set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--dhcp | --no-dhcp] [--dns-publish-fixed-ip | --no-dns-publish-fixed-ip] [--gateway <gateway>] [--network-segment <network-segment>] [--description <description>] [--tag <tag>] [--no-tag] [--allocation-pool start=<ip-address>,end=<ip-address>] [--no-allocation-pool] [--dns-nameserver <dns-nameserver>] [--no-dns-nameservers] [--host-route destination=<subnet>,gateway=<ip-address>] [--no-host-route] [--service-type <service-type>] <subnet>", "openstack subnet show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <subnet>", "openstack subnet unset [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--allocation-pool start=<ip-address>,end=<ip-address>] [--gateway] [--dns-nameserver <dns-nameserver>] [--host-route destination=<subnet>,gateway=<ip-address>] [--service-type <service-type>] [--tag <tag> | --all-tag] <subnet>" ]
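The synopses above list every accepted option. As a quick orientation only, a minimal worked sequence such as the following creates one subnet with an explicit range and one allocated from a subnet pool, then inspects the results; the network name, pool name, and all addresses are placeholders rather than values taken from this reference:

# Subnet with an explicit range, gateway, DNS server, and allocation pool
openstack subnet create --network private-net \
  --subnet-range 192.0.2.0/24 \
  --gateway 192.0.2.1 \
  --dns-nameserver 203.0.113.53 \
  --allocation-pool start=192.0.2.10,end=192.0.2.200 \
  private-subnet

# Subnet allocated from a subnet pool instead of an explicit range
openstack subnet pool create --pool-prefix 198.51.100.0/24 --default-prefix-length 26 demo-pool
openstack subnet create --network private-net --subnet-pool demo-pool --prefix-length 26 pool-subnet

# Inspect the results
openstack subnet list --network private-net
openstack subnet show private-subnet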
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/subnet
Chapter 13. Managing Ansible content
Chapter 13. Managing Ansible content You can import Ansible collections from several sources to Satellite Server. For more information about Ansible integration in Satellite, see Managing configurations by using Ansible integration . 13.1. Synchronizing Ansible Collections On Satellite, you can synchronize your Ansible Collections from Private Automation Hub, console.redhat.com , and other Satellite instances. After the synchronization, Ansible Collections appear in the Satellite web UI as a new repository type under Content . Procedure In the Satellite web UI, navigate to Content > Products . In the Products window, select the name of the product for which you want to create a repository. Click the Repositories tab, and then click New Repository . In the Name field, enter a name for the repository. The Label field is populated automatically based on the name. From the Type list, select ansible collection . In the Upstream URL field, enter the URL for the upstream collections repository. The URL can be any Ansible Galaxy endpoint. For example, https://console.redhat.com/api/automation-hub/ . Optional: In the Requirements.yml field, you can specify the list of collections that you want to sync from the endpoint, as well as their versions. If you do not specify the list of collections, everything from the endpoint is synced. --- collections: - name: my_namespace.my_collection version: 1.2.3 For more information, see Installing roles and collections from the same requirements.yml file in the Galaxy User Guide . Authenticate. To sync Satellite from Private Automation Hub , enter your token in the Auth Token field. For more information, see Connect Private Automation Hub in Connect to Hub . To sync Satellite from console.redhat.com , enter your token in the Auth Token field and enter your SSO URL in the Auth URL field. For more information, see Getting started with automation hub . To sync Satellite from Satellite, leave both authentication fields blank. Click Save . Navigate to the Ansible Collections repository. From the Select Action menu, select Sync Now .
[ "--- collections: - name: my_namespace.my_collection version: 1.2.3" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/managing_ansible_content_content-management
Chapter 3. Alternative provisioning network methods
Chapter 3. Alternative provisioning network methods This section contains information about other methods that you can use to configure the provisioning network to accommodate routed spine-leaf with composable networks. 3.1. VLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VLAN tunnel across the L3 topology. For more information, see Figure 3.1, "VLAN provisioning network topology" . If you use a VLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, trunk a VLAN between the Top-of-Rack (ToR) leaf switches. In the following diagram, the StorageLeaf networks are presented to the Ceph storage and Compute nodes; the NetworkLeaf represents an example of any network that you want to compose. Figure 3.1. VLAN provisioning network topology 3.2. VXLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VXLAN tunnel to span across the layer 3 topology. For more information, see Figure 3.2, "VXLAN provisioning network topology" . If you use a VXLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, configure VXLAN endpoints on the Top-of-Rack (ToR) leaf switches. Figure 3.2. VXLAN provisioning network topology
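The provisioning tunnel itself is configured on the ToR switches, so the exact commands are vendor specific and are not covered here. Purely to illustrate what a VXLAN endpoint consists of (a VNI, a local tunnel address, and UDP port 4789), an equivalent endpoint built on a Linux host with iproute2 might look like the sketch below; the interface name, addresses, and VNI are placeholders, and this is not part of the director deployment procedure:

# Illustrative only: a VXLAN tunnel endpoint (VTEP) with VNI 4098 bound to eth0
ip link add vxlan-prov type vxlan id 4098 local 192.0.2.11 dev eth0 dstport 4789
ip link set vxlan-prov up
# Remote VTEPs are typically learned dynamically or added statically, for example:
bridge fdb append 00:00:00:00:00:00 dev vxlan-prov dst 192.0.2.21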
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/spine_leaf_networking/alternative-provisioning-network-methods
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Red Hat certified hardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core x86_64 CPU. A quad core x86_64 CPU or multiple dual core x86_64 CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels. Client Operating System SPICE Support Supported QXLDOD drivers are available on Red Hat Enterprise Linux 7.2 and later, and Windows 10. Note SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested. 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 8.6. 
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Find a certified solution . For more information on the requirements and limitations that apply to guests see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-VTM or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere SandyBridge IvyBridge Haswell Broadwell Skylake Client Skylake Server Cascadelake Server For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example: Intel Cascadelake Server Family Secure Intel Cascadelake Server Family The Secure CPU type contains the latest updates. For details, see BZ# 1731395 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. Procedure At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in Red Hat Virtualization Host is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in Red Hat Virtualization Host is 16 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. 
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 5 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB /var/tmp - 10 GB swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details. Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 64 GiB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. Check vendor specification and datasheets to confirm that your hardware meets these requirements. 
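To confirm on a host that these device assignment prerequisites are actually in effect, a couple of standard checks help; this is an illustrative sketch rather than a mandated procedure, and the exact kernel messages differ between Intel (VT-d/DMAR) and AMD (AMD-Vi) platforms:

# Check that the kernel activated the IOMMU
dmesg | grep -iE 'dmar|amd-vi|iommu'
# List IOMMU groups; devices in the same group can only be assigned together
find /sys/kernel/iommu_groups/ -type l | sort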
The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine. vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking requirements 2.3.1. General requirements Red Hat Virtualization requires IPv6 to remain enabled on the physical or virtual machine running the Manager. Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Network range for self-hosted engine deployment The self-hosted engine deployment process temporarily uses a /24 network address under 192.168 . It defaults to 192.168.222.0/24 , and if this address is in use, it tries other /24 addresses under 192.168 until it finds one that is not in use. If it does not find an unused network address in this range, deployment fails. When installing the self-hosted engine using the command line, you can set the deployment script to use an alternate /24 network range with the option --ansible-extra-vars=he_ipv4_subnet_prefix= PREFIX , where PREFIX is the prefix for the default range. For example: # hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.222 Note You can only set another range by installing Red Hat Virtualization as a self-hosted engine using the command line. 2.3.3. Firewall Requirements for DNS, NTP, and IPMI Fencing The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... 
) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. 2.3.4. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.3. Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager ( ovirt-imageio service) Required for communication with the ovirt-imageio service. Yes M8 6642 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn .
Because they both run on the same host, their communication is not visible to the network. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see RHV: How to customize the Host's firewall rules? . Note A diagram of these firewall requirements is available at Red Hat Virtualization: Firewall Requirements Diagram . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. 
H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. Yes H11 54322 TCP Red Hat Virtualization Manager ovirt-imageio service Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ovirt-imageio service. Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. No H14 123 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NTP Server NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. No H15 4500 TCP, UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H16 500 UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H17 - AH, ESP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.6. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled . 2.3.7. Maximum Transmission Unit Requirements The recommended Maximum Transmission Units (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting to a different MTU after the environment is set up. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU .
[ "grep -E 'svm|vmx' /proc/cpuinfo | grep nx", "hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.222" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/RHV_requirements
15.2. BIND
15.2. BIND This section covers BIND (Berkeley Internet Name Domain), the DNS server included in Red Hat Enterprise Linux. It focuses on the structure of its configuration files, and describes how to administer it both locally and remotely. 15.2.1. Empty Zones BIND configures a number of " empty zones " to prevent recursive servers from sending unnecessary queries to Internet servers that cannot handle them (thus creating delays and SERVFAIL responses to clients who query for them). These empty zones ensure that immediate and authoritative NXDOMAIN responses are returned instead. The configuration option empty-zones-enable controls whether or not empty zones are created, whilst the option disable-empty-zone can be used in addition to disable one or more empty zones from the list of default prefixes that would be used. The number of empty zones created for RFC 1918 prefixes has been increased, and users of BIND 9.9 and above will see the RFC 1918 empty zones both when empty-zones-enable is unspecified (defaults to yes ), and when it is explicitly set to yes . 15.2.2. Configuring the named Service When the named service is started, it reads the configuration from the files as described in Table 15.1, "The named Service Configuration Files" . Table 15.1. The named Service Configuration Files Path Description /etc/named.conf The main configuration file. /etc/named/ An auxiliary directory for configuration files that are included in the main configuration file. The configuration file consists of a collection of statements with nested options surrounded by opening and closing curly brackets ( { and } ). Note that when editing the file, you have to be careful not to make any syntax error, otherwise the named service will not start. A typical /etc/named.conf file is organized as follows: Note If you have installed the bind-chroot package, the BIND service will run in the chroot environment. In that case, the initialization script will mount the above configuration files using the mount --bind command, so that you can manage the configuration outside this environment. There is no need to copy anything into the /var/named/chroot/ directory because it is mounted automatically. This simplifies maintenance since you do not need to take any special care of BIND configuration files if it is run in a chroot environment. You can organize everything as you would with BIND not running in a chroot environment. The following directories are automatically mounted into the /var/named/chroot/ directory if the corresponding mount point directories underneath /var/named/chroot/ are empty: /etc/named /etc/pki/dnssec-keys /run/named /var/named /usr/lib64/bind or /usr/lib/bind (architecture dependent). The following files are also mounted if the target file does not exist in /var/named/chroot/ : /etc/named.conf /etc/rndc.conf /etc/rndc.key /etc/named.rfc1912.zones /etc/named.dnssec.keys /etc/named.iscdlv.key /etc/named.root.key Important Editing files which have been mounted in a chroot environment requires creating a backup copy and then editing the original file. Alternatively, use an editor with " edit-a-copy " mode disabled. For example, to edit the BIND's configuration file, /etc/named.conf , with Vim while it is running in a chroot environment, issue the following command as root : 15.2.2.1. 
Installing BIND in a chroot Environment To install BIND to run in a chroot environment, issue the following command as root : To enable the named-chroot service, first check if the named service is running by issuing the following command: If it is running, it must be disabled. To disable named , issue the following commands as root : Then, to enable the named-chroot service, issue the following commands as root : To check the status of the named-chroot service, issue the following command as root : 15.2.2.2. Common Statement Types The following types of statements are commonly used in /etc/named.conf : acl The acl (Access Control List) statement allows you to define groups of hosts, so that they can be permitted or denied access to the nameserver. It takes the following form: The acl-name statement name is the name of the access control list, and the match-element option is usually an individual IP address (such as 10.0.1.1 ) or a Classless Inter-Domain Routing ( CIDR ) network notation (for example, 10.0.1.0/24 ). For a list of already defined keywords, see Table 15.2, "Predefined Access Control Lists" . Table 15.2. Predefined Access Control Lists Keyword Description any Matches every IP address. localhost Matches any IP address that is in use by the local system. localnets Matches any IP address on any network to which the local system is connected. none Does not match any IP address. The acl statement can be especially useful in conjunction with other statements such as options . Example 15.2, "Using acl in Conjunction with Options" defines two access control lists, black-hats and red-hats , and adds black-hats on the blacklist while granting red-hats normal access. Example 15.2. Using acl in Conjunction with Options include The include statement allows you to include files in the /etc/named.conf , so that potentially sensitive data can be placed in a separate file with restricted permissions. It takes the following form: The file-name statement name is an absolute path to a file. Example 15.3. Including a File to /etc/named.conf options The options statement allows you to define global server configuration options as well as to set defaults for other statements. It can be used to specify the location of the named working directory, the types of queries allowed, and much more. It takes the following form: For a list of frequently used option directives, see Table 15.3, "Commonly Used Configuration Options" below. Table 15.3. Commonly Used Configuration Options Option Description allow-query Specifies which hosts are allowed to query the nameserver for authoritative resource records. It accepts an access control list, a collection of IP addresses, or networks in the CIDR notation. All hosts are allowed by default. allow-query-cache Specifies which hosts are allowed to query the nameserver for non-authoritative data such as recursive queries. Only localhost and localnets are allowed by default. blackhole Specifies which hosts are not allowed to query the nameserver. This option should be used when a particular host or network floods the server with requests. The default option is none . directory Specifies a working directory for the named service. The default option is /var/named/ . disable-empty-zone Used to disable one or more empty zones from the list of default prefixes that would be used. Can be specified in the options statement and also in view statements. It can be used multiple times. dnssec-enable Specifies whether to return DNSSEC related resource records. 
The default option is yes . dnssec-validation Specifies whether to prove that resource records are authentic through DNSSEC. The default option is yes . empty-zones-enable Controls whether or not empty zones are created. Can be specified only in the options statement. forwarders Specifies a list of valid IP addresses for nameservers to which the requests should be forwarded for resolution. forward Specifies the behavior of the forwarders directive. It accepts the following options: first - The server will query the nameservers listed in the forwarders directive before attempting to resolve the name on its own. only - When unable to query the nameservers listed in the forwarders directive, the server will not attempt to resolve the name on its own. listen-on Specifies the IPv4 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv4 interfaces are used by default. listen-on-v6 Specifies the IPv6 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv6 interfaces are used by default. max-cache-size Specifies the maximum amount of memory to be used for server caches. When the limit is reached, the server causes records to expire prematurely so that the limit is not exceeded. In a server with multiple views, the limit applies separately to the cache of each view. The default option is 32M . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. pid-file Specifies the location of the process ID file created by the named service. recursion Specifies whether to act as a recursive server. The default option is yes . statistics-file Specifies an alternate location for statistics files. The /var/named/named.stats file is used by default. Note The directory used by named for runtime data has been moved from the BIND default location, /var/run/named/ , to a new location /run/named/ . As a result, the PID file has been moved from the default location /var/run/named/named.pid to the new location /run/named/named.pid . In addition, the session-key file has been moved to /run/named/session.key . These locations need to be specified by statements in the options section. See Example 15.4, "Using the options Statement" . Important To prevent distributed denial of service (DDoS) attacks, it is recommended that you use the allow-query-cache option to restrict recursive DNS services for a particular subset of clients only. See the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" , and the named.conf manual page for a complete list of available options. Example 15.4. Using the options Statement zone The zone statement allows you to define the characteristics of a zone, such as the location of its configuration file and zone-specific options, and can be used to override the global options statements. 
It takes the following form: The zone-name attribute is the name of the zone, zone-class is the optional class of the zone, and option is a zone statement option as described in Table 15.4, "Commonly Used Options in Zone Statements" . The zone-name attribute is particularly important, as it is the default value assigned for the $ORIGIN directive used within the corresponding zone file located in the /var/named/ directory. The named daemon appends the name of the zone to any non-fully qualified domain name listed in the zone file. For example, if a zone statement defines the namespace for example.com , use example.com as the zone-name so that it is placed at the end of host names within the example.com zone file. For more information about zone files, see Section 15.2.3, "Editing Zone Files" . Table 15.4. Commonly Used Options in Zone Statements Option Description allow-query Specifies which clients are allowed to request information about this zone. This option overrides the global allow-query option. All query requests are allowed by default. allow-transfer Specifies which secondary servers are allowed to request a transfer of the zone's information. All transfer requests are allowed by default. allow-update Specifies which hosts are allowed to dynamically update information in their zone. The default option is to deny all dynamic update requests. Note that you should be careful when allowing hosts to update information about their zone. Do not set IP addresses in this option unless the server is in the trusted network. Instead, use a TSIG key as described in Section 15.2.6.3, "Transaction SIGnatures (TSIG)" . file Specifies the name of the file in the named working directory that contains the zone's configuration data. masters Specifies from which IP addresses to request authoritative zone information. This option is used only if the zone is defined as type slave . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify the primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. type Specifies the zone type. It accepts the following options: delegation-only - Enforces the delegation status of infrastructure zones such as COM, NET, or ORG. Any answer that is received without an explicit or implicit delegation is treated as NXDOMAIN . This option is only applicable in TLDs (Top-Level Domain) or root zone files used in recursive or caching implementations. forward - Forwards all requests for information about this zone to other nameservers. hint - A special type of zone used to point to the root nameservers which resolve queries when a zone is not otherwise known. No configuration beyond the default is necessary with a hint zone. master - Designates the nameserver as authoritative for this zone. A zone should be set as the master if the zone's configuration files reside on the system. slave - Designates the nameserver as a secondary server for this zone. The primary server is specified in the masters directive. Most changes to the /etc/named.conf file of a primary or secondary nameserver involve adding, modifying, or deleting zone statements, and only a small subset of zone statement options is usually needed for a nameserver to work efficiently.
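For orientation, the two zone statements discussed next (they also appear in full in the command listing for this section as Example 15.5 and Example 15.6) pair a primary definition with its secondary counterpart:

zone "example.com" IN {
    type master;
    file "example.com.zone";
    allow-transfer { 192.168.0.2; };
};

zone "example.com" {
    type slave;
    file "slaves/example.com.zone";
    masters { 192.168.0.1; };
};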
In Example 15.5, "A Zone Statement for a Primary nameserver" , the zone is identified as example.com , the type is set to master , and the named service is instructed to read the /var/named/example.com.zone file. It also allows only a secondary nameserver ( 192.168.0.2 ) to transfer the zone. Example 15.5. A Zone Statement for a Primary nameserver A secondary server's zone statement is slightly different. The type is set to slave , and the masters directive is telling named the IP address of the primary server. In Example 15.6, "A Zone Statement for a Secondary nameserver" , the named service is configured to query the primary server at the 192.168.0.1 IP address for information about the example.com zone. The received information is then saved to the /var/named/slaves/example.com.zone file. Note that you have to put all secondary zones in the /var/named/slaves/ directory, otherwise the service will fail to transfer the zone. Example 15.6. A Zone Statement for a Secondary nameserver 15.2.2.3. Other Statement Types The following types of statements are less commonly used in /etc/named.conf : controls The controls statement allows you to configure various security requirements necessary to use the rndc command to administer the named service. See Section 15.2.4, "Using the rndc Utility" for more information on the rndc utility and its usage. key The key statement allows you to define a particular key by name. Keys are used to authenticate various actions, such as secure updates or the use of the rndc command. Two options are used with key : algorithm algorithm-name - The type of algorithm to be used (for example, hmac-md5 ). secret " key-value " - The encrypted key. See Section 15.2.4, "Using the rndc Utility" for more information on the rndc utility and its usage. logging The logging statement allows you to use multiple types of logs, so called channels . By using the channel option within the statement, you can construct a customized type of log with its own file name ( file ), size limit ( size ), version number ( version ), and level of importance ( severity ). Once a customized channel is defined, a category option is used to categorize the channel and begin logging when the named service is restarted. By default, named sends standard messages to the rsyslog daemon, which places them in /var/log/messages . Several standard channels are built into BIND with various severity levels, such as default_syslog (which handles informational logging messages) and default_debug (which specifically handles debugging messages). A default category, called default , uses the built-in channels to do normal logging without any special configuration. Customizing the logging process can be a very detailed process and is beyond the scope of this chapter. For information on creating custom BIND logs, see the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" . server The server statement allows you to specify options that affect how the named service should respond to remote nameservers, especially with regard to notifications and zone transfers. The transfer-format option controls the number of resource records that are sent with each message. It can be either one-answer (only one resource record), or many-answers (multiple resource records). Note that while the many-answers option is more efficient, it is not supported by older versions of BIND. trusted-keys The trusted-keys statement allows you to specify assorted public keys used for secure DNS (DNSSEC). 
See Section 15.2.6.4, "DNS Security Extensions (DNSSEC)" for more information on this topic. view The view statement allows you to create special views depending upon which network the host querying the nameserver is on. This allows some hosts to receive one answer regarding a zone while other hosts receive totally different information. Alternatively, certain zones may only be made available to particular trusted hosts while non-trusted hosts can only make queries for other zones. Multiple views can be used as long as their names are unique. The match-clients option allows you to specify the IP addresses that apply to a particular view. If the options statement is used within a view, it overrides the already configured global options. Finally, most view statements contain multiple zone statements that apply to the match-clients list. Note that the order in which the view statements are listed is important, as the first statement that matches a particular client's IP address is used. For more information on this topic, see Section 15.2.6.1, "Multiple Views" . 15.2.2.4. Comment Tags In addition to statements, the /etc/named.conf file can also contain comments. Comments are ignored by the named service, but can prove useful when providing additional information to a user. The following are valid comment tags: // Any text after the // characters to the end of the line is considered a comment. For example: # Any text after the # character to the end of the line is considered a comment. For example: /* and */ Any block of text enclosed in /* and */ is considered a comment. For example: 15.2.3. Editing Zone Files As outlined in Section 15.1.1, "Name server Zones" , zone files contain information about a namespace. They are stored in the named working directory located in /var/named/ by default. Each zone file is named according to the file option in the zone statement, usually in a way that relates to the domain in question and identifies the file as containing zone data, such as example.com.zone . Table 15.5. The named Service Zone Files Path Description /var/named/ The working directory for the named service. The nameserver is not allowed to write to this directory. /var/named/slaves/ The directory for secondary zones. This directory is writable by the named service. /var/named/dynamic/ The directory for other files, such as dynamic DNS (DDNS) zones or managed DNSSEC keys. This directory is writable by the named service. /var/named/data/ The directory for various statistics and debugging files. This directory is writable by the named service. A zone file consists of directives and resource records. Directives tell the nameserver to perform tasks or apply special settings to the zone; resource records define the parameters of the zone and assign identities to individual hosts. While the directives are optional, the resource records are required in order to provide name service to a zone. All directives and resource records should be entered on individual lines. 15.2.3.1. Common Directives Directives begin with the dollar sign character ( $ ) followed by the name of the directive, and usually appear at the top of the file. The following directives are commonly used in zone files: $INCLUDE The $INCLUDE directive allows you to include another file at the place where it appears, so that other zone settings can be stored in a separate zone file. Example 15.7.
Using the $INCLUDE Directive $ORIGIN The $ORIGIN directive allows you to append the domain name to unqualified records, such as those with the host name only. Note that the use of this directive is not necessary if the zone is specified in /etc/named.conf , since the zone name is used by default. In Example 15.8, "Using the $ORIGIN Directive" , any names used in resource records that do not end in a trailing period (the . character) are appended with example.com . Example 15.8. Using the $ORIGIN Directive $TTL The $TTL directive allows you to set the default Time to Live (TTL) value for the zone, that is, how long a zone record is valid. Each resource record can contain its own TTL value, which overrides this directive. Increasing this value allows remote nameservers to cache the zone information for a longer period of time, reducing the number of queries for the zone and lengthening the amount of time required to propagate resource record changes. Example 15.9. Using the $TTL Directive 15.2.3.2. Common Resource Records The following resource records are commonly used in zone files: A The Address record specifies an IP address to be assigned to a name. It takes the following form: If the hostname value is omitted, the record will point to the last specified hostname . In Example 15.10, "Using the A Resource Record" , the requests for server1.example.com are pointed to 10.0.1.3 or 10.0.1.5 . Example 15.10. Using the A Resource Record CNAME The Canonical Name record maps one name to another. Because of this, this type of record is sometimes referred to as an alias record . It takes the following form: CNAME records are most commonly used to point to services that use a common naming scheme, such as www for Web servers. However, there are multiple restrictions for their usage: CNAME records should not point to other CNAME records. This is mainly to avoid possible infinite loops. CNAME records should not contain other resource record types (such as A, NS, MX, and so on). The only exception is DNSSEC-related records (RRSIG, NSEC, and so on) when the zone is signed. Other resource records that point to the fully qualified domain name (FQDN) of a host (NS, MX, PTR) should not point to a CNAME record. In Example 15.11, "Using the CNAME Resource Record" , the A record binds a host name to an IP address, while the CNAME record points the commonly used www host name to it. Example 15.11. Using the CNAME Resource Record MX The Mail Exchange record specifies where the mail sent to a particular namespace controlled by this zone should go. It takes the following form: The email-server-name is a fully qualified domain name (FQDN). The preference-value allows numerical ranking of the email servers for a namespace, giving preference to some email systems over others. The MX resource record with the lowest preference-value is preferred over the others. However, multiple email servers can possess the same value to distribute email traffic evenly among them. In Example 15.12, "Using the MX Resource Record" , the first mail.example.com email server is preferred to the mail2.example.com email server when receiving email destined for the example.com domain. Example 15.12. Using the MX Resource Record NS The Nameserver record announces authoritative nameservers for a particular zone. It takes the following form: The nameserver-name should be a fully qualified domain name (FQDN).
Note that when two nameservers are listed as authoritative for the domain, it is not important whether these nameservers are secondary nameservers, or if one of them is a primary server. They are both still considered authoritative. Example 15.13. Using the NS Resource Record PTR The Pointer record points to another part of the namespace. It takes the following form: The last-IP-digit directive is the last number in an IP address, and the FQDN-of-system is a fully qualified domain name (FQDN). PTR records are primarily used for reverse name resolution, as they point IP addresses back to a particular name. See Section 15.2.3.4.2, "A Reverse Name Resolution Zone File" for examples of PTR records in use. SOA The Start of Authority record announces important authoritative information about a namespace to the nameserver. Located after the directives, it is the first resource record in a zone file. It takes the following form: The directives are as follows: The @ symbol places the $ORIGIN directive (or the zone's name if the $ORIGIN directive is not set) as the namespace being defined by this SOA resource record. The primary-name-server directive is the host name of the primary nameserver that is authoritative for this domain. The hostmaster-email directive is the email address of the person to contact about the namespace. The serial-number directive is a numerical value incremented every time the zone file is altered to indicate it is time for the named service to reload the zone. The time-to-refresh directive is the numerical value secondary nameservers use to determine how long to wait before asking the primary nameserver if any changes have been made to the zone. The time-to-retry directive is a numerical value used by secondary nameservers to determine the length of time to wait before issuing a refresh request in the event that the primary nameserver is not answering. If the primary server has not replied to a refresh request before the amount of time specified in the time-to-expire directive elapses, the secondary servers stop responding as an authority for requests concerning that namespace. In BIND 4 and 8, the minimum-TTL directive is the amount of time other nameservers cache the zone's information. In BIND 9, it defines how long negative answers are cached for. Caching of negative answers can be set to a maximum of 3 hours ( 3H ). When configuring BIND, all times are specified in seconds. However, it is possible to use abbreviations when specifying units of time other than seconds, such as minutes ( M ), hours ( H ), days ( D ), and weeks ( W ). Table 15.6, "Seconds compared to other time units" shows an amount of time in seconds and the equivalent time in another format. Table 15.6. Seconds compared to other time units Seconds Other Time Units 60 1M 1800 30M 3600 1H 10800 3H 21600 6H 43200 12H 86400 1D 259200 3D 604800 1W 31536000 365D Example 15.14. Using the SOA Resource Record 15.2.3.3. Comment Tags In addition to resource records and directives, a zone file can also contain comments. Comments are ignored by the named service, but can prove useful when providing additional information to the user. Any text after the semicolon character to the end of the line is considered a comment. For example: 15.2.3.4. Example Usage The following examples show the basic usage of zone files. 15.2.3.4.1. A Simple Zone File Example 15.15, "A simple zone file" demonstrates the use of standard directives and SOA values. Example 15.15.
A simple zone file In this example, the authoritative nameservers are set as dns1.example.com and dns2.example.com , and are tied to the 10.0.1.1 and 10.0.1.2 IP addresses respectively using the A record. The email servers configured with the MX records point to mail and mail2 through A records. Since these names do not end in a trailing period, the $ORIGIN domain is placed after them, expanding them to mail.example.com and mail2.example.com . Services available at the standard names, such as www.example.com ( WWW ), are pointed at the appropriate servers using the CNAME record. This zone file would be called into service with a zone statement in the /etc/named.conf file similar to the following: 15.2.3.4.2. A Reverse Name Resolution Zone File A reverse name resolution zone file is used to translate an IP address in a particular namespace into a fully qualified domain name (FQDN). It looks very similar to a standard zone file, except that the PTR resource records are used to link the IP addresses to a fully qualified domain name as shown in Example 15.16, "A reverse name resolution zone file" . Example 15.16. A reverse name resolution zone file In this example, IP addresses 10.0.1.1 through 10.0.1.6 are pointed to the corresponding fully qualified domain name. This zone file would be called into service with a zone statement in the /etc/named.conf file similar to the following: There is very little difference between this example and a standard zone statement, except for the zone name. Note that a reverse name resolution zone requires the first three blocks of the IP address reversed followed by .in-addr.arpa . This allows the single block of IP numbers used in the reverse name resolution zone file to be associated with the zone. 15.2.4. Using the rndc Utility The rndc utility is a command-line tool that allows you to administer the named service, both locally and from a remote machine. Its usage is as follows: 15.2.4.1. Configuring the Utility To prevent unauthorized access to the service, named must be configured to listen on the selected port ( 953 by default), and an identical key must be used by both the service and the rndc utility. Table 15.7. Relevant files Path Description /etc/named.conf The default configuration file for the named service. /etc/rndc.conf The default configuration file for the rndc utility. /etc/rndc.key The default key location. The rndc configuration is located in /etc/rndc.conf . If the file does not exist, the utility will use the key located in /etc/rndc.key , which was generated automatically during the installation process using the rndc-confgen -a command. The named service is configured using the controls statement in the /etc/named.conf configuration file as described in Section 15.2.2.3, "Other Statement Types" . Unless this statement is present, only the connections from the loopback address ( 127.0.0.1 ) will be allowed, and the key located in /etc/rndc.key will be used. For more information on this topic, see the manual pages and the BIND 9 Administrator Reference Manual listed in Section 15.2.8, "Additional Resources" . Important To prevent unprivileged users from sending control commands to the service, make sure only root is allowed to read the /etc/rndc.key file: 15.2.4.2. Checking the Service Status To check the current status of the named service, use the following command: 15.2.4.3.
Reloading the Configuration and Zones To reload both the configuration file and zones, type the following at a shell prompt: This will reload the zones while keeping all previously cached responses, so that you can make changes to the zone files without losing all stored name resolutions. To reload a single zone, specify its name after the reload command, for example: Finally, to reload the configuration file and newly added zones only, type: Note If you intend to manually modify a zone that uses Dynamic DNS (DDNS), make sure you run the freeze command first: Once you are finished, run the thaw command to allow the DDNS again and reload the zone: 15.2.4.4. Updating Zone Keys To update the DNSSEC keys and sign the zone, use the sign command. For example: Note that to sign a zone with the above command, the auto-dnssec option has to be set to maintain in the zone statement. For example: 15.2.4.5. Enabling the DNSSEC Validation To enable the DNSSEC validation, issue the following command as root : Similarly, to disable this option, type: See the options statement described in Section 15.2.2.2, "Common Statement Types" for information on how to configure this option in /etc/named.conf . The Red Hat Enterprise Linux 7 Security Guide has a comprehensive section on DNSSEC. 15.2.4.6. Enabling the Query Logging To enable (or disable in case it is currently enabled) the query logging, issue the following command as root : To check the current setting, use the status command as described in Section 15.2.4.2, "Checking the Service Status" . 15.2.5. Using the dig Utility The dig utility is a command-line tool that allows you to perform DNS lookups and debug a nameserver configuration. Its typical usage is as follows: See Section 15.2.3.2, "Common Resource Records" for a list of common values to use for type . 15.2.5.1. Looking Up a Nameserver To look up a nameserver for a particular domain, use the command in the following form: In Example 15.17, "A sample nameserver lookup" , the dig utility is used to display nameservers for example.com . Example 15.17. A sample nameserver lookup 15.2.5.2. Looking Up an IP Address To look up an IP address assigned to a particular domain, use the command in the following form: In Example 15.18, "A sample IP address lookup" , the dig utility is used to display the IP address of example.com . Example 15.18. A sample IP address lookup 15.2.5.3. Looking Up a Host Name To look up a host name for a particular IP address, use the command in the following form: In Example 15.19, "A Sample Host Name Lookup" , the dig utility is used to display the host name assigned to 192.0.32.10 . Example 15.19. A Sample Host Name Lookup 15.2.6. Advanced Features of BIND Most BIND implementations only use the named service to provide name resolution services or to act as an authority for a particular domain. However, BIND version 9 has a number of advanced features that allow for a more secure and efficient DNS service. Important Before attempting to use advanced features like DNSSEC, TSIG, or IXFR (Incremental Zone Transfer), make sure that the particular feature is supported by all nameservers in the network environment, especially when you use older versions of BIND or non-BIND servers. All of the features mentioned are discussed in greater detail in the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" . 15.2.6.1. Multiple Views Optionally, different information can be presented to a client depending on the network a request originates from. 
This is primarily used to deny sensitive DNS entries from clients outside of the local network, while allowing queries from clients inside the local network. To configure multiple views, add the view statement to the /etc/named.conf configuration file. Use the match-clients option to match IP addresses or entire networks and give them special options and zone data. 15.2.6.2. Incremental Zone Transfers (IXFR) Incremental Zone Transfers ( IXFR ) allow a secondary nameserver to only download the updated portions of a zone modified on a primary nameserver. Compared to the standard transfer process, this makes the notification and update process much more efficient. Note that IXFR is only available when using dynamic updating to make changes to primary zone records. If manually editing zone files to make changes, Automatic Zone Transfer ( AXFR ) is used. 15.2.6.3. Transaction SIGnatures (TSIG) Transaction SIGnatures (TSIG) ensure that a shared secret key exists on both primary and secondary nameservers before allowing a transfer. This strengthens the standard IP address-based method of transfer authorization, since attackers would not only need to have access to the IP address to transfer the zone, but they would also need to know the secret key. Since version 9, BIND also supports TKEY , which is another shared secret key method of authorizing zone transfers. Important When communicating over an insecure network, do not rely on IP address-based authentication only. 15.2.6.4. DNS Security Extensions (DNSSEC) Domain Name System Security Extensions ( DNSSEC ) provide origin authentication of DNS data, authenticated denial of existence, and data integrity. When a particular domain is marked as secure, the SERVFAIL response is returned for each resource record that fails the validation. Note that to debug a DNSSEC-signed domain or a DNSSEC-aware resolver, you can use the dig utility as described in Section 15.2.5, "Using the dig Utility" . Useful options are +dnssec (requests DNSSEC-related resource records by setting the DNSSEC OK bit), +cd (tells the recursive nameserver not to validate the response), and +bufsize=512 (changes the packet size to 512B to get through some firewalls). 15.2.6.5. Internet Protocol version 6 (IPv6) Internet Protocol version 6 ( IPv6 ) is supported through the use of AAAA resource records, and the listen-on-v6 directive as described in Table 15.3, "Commonly Used Configuration Options" . 15.2.7. Common Mistakes to Avoid The following is a list of recommendations on how to avoid common mistakes users make when configuring a nameserver: Use semicolons and curly brackets correctly An omitted semicolon or unmatched curly bracket in the /etc/named.conf file can prevent the named service from starting. Use the period (the . character) correctly In zone files, a period at the end of a domain name denotes a fully qualified domain name. If omitted, the named service will append the name of the zone or the value of $ORIGIN to complete it. Increment the serial number when editing a zone file If the serial number is not incremented, the primary nameserver will have the correct, new information, but the secondary nameservers will never be notified of the change, and will not attempt to refresh their data for that zone. Configure the firewall If a firewall is blocking connections from the named service to other nameservers, the recommended practice is to change the firewall settings.
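As a minimal sketch, assuming the server uses firewalld and that DNS traffic for named is being filtered in the default zone, the firewall can be opened as follows; adapt the commands to whatever your firewall actually blocks:

~]# firewall-cmd --permanent --add-service=dns
~]# firewall-cmd --reload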
Warning Using a fixed UDP source port for DNS queries is a potential security vulnerability that could allow an attacker to conduct cache-poisoning attacks more easily. To prevent this, by default DNS sends from a random ephemeral port. Configure your firewall to allow outgoing queries from a random UDP source port. The range 1024 to 65535 is used by default. 15.2.8. Additional Resources The following sources of information provide additional resources regarding BIND. 15.2.8.1. Installed Documentation BIND features a full range of installed documentation covering many different topics, each placed in its own subject directory. For each item below, replace version with the version of the bind package installed on the system: /usr/share/doc/bind- version / The main directory containing the most recent documentation. The directory contains the BIND 9 Administrator Reference Manual in HTML and PDF formats, which details BIND resource requirements, how to configure different types of nameservers, how to perform load balancing, and other advanced topics. /usr/share/doc/bind- version /sample/etc/ The directory containing examples of named configuration files. rndc(8) The manual page for the rndc name server control utility, containing documentation on its usage. named(8) The manual page for the Internet domain name server named , containing documentation on assorted arguments that can be used to control the BIND nameserver daemon. lwresd(8) The manual page for the lightweight resolver daemon lwresd , containing documentation on the daemon and its usage. named.conf(5) The manual page with a comprehensive list of options available within the named configuration file. rndc.conf(5) The manual page with a comprehensive list of options available within the rndc configuration file. 15.2.8.2. Online Resources https://access.redhat.com/site/articles/770133 A Red Hat Knowledgebase article about running BIND in a chroot environment, including the differences compared to Red Hat Enterprise Linux 6. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/ The Red Hat Enterprise Linux 7 Security Guide has a comprehensive section on DNSSEC. https://www.icann.org/namecollision The ICANN FAQ on domain name collision .
[ "statement-1 [\" statement-1-name \"] [ statement-1-class ] { option-1 ; option-2 ; option-N ; }; statement-2 [\" statement-2-name \"] [ statement-2-class ] { option-1 ; option-2 ; option-N ; }; statement-N [\" statement-N-name \"] [ statement-N-class ] { option-1 ; option-2 ; option-N ; };", "~]# vim -c \"set backupcopy=yes\" /etc/named.conf", "~]# yum install bind-chroot", "~]USD systemctl status named", "~]# systemctl stop named", "~]# systemctl disable named", "~]# systemctl enable named-chroot", "~]# systemctl start named-chroot", "~]# systemctl status named-chroot", "acl acl-name { match-element ; };", "acl black-hats { 10.0.2.0/24; 192.168.0.0/24; 1234:5678::9abc/24; }; acl red-hats { 10.0.1.0/24; }; options { blackhole { black-hats; }; allow-query { red-hats; }; allow-query-cache { red-hats; }; };", "include \" file-name \"", "include \"/etc/named.rfc1912.zones\";", "options { option ; };", "options { allow-query { localhost; }; listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; max-cache-size 256M; directory \"/var/named\"; statistics-file \"/var/named/data/named_stats.txt\"; recursion yes; dnssec-enable yes; dnssec-validation yes; pid-file \"/run/named/named.pid\"; session-keyfile \"/run/named/session.key\"; };", "zone zone-name [ zone-class ] { option ; };", "zone \"example.com\" IN { type master; file \"example.com.zone\"; allow-transfer { 192.168.0.2; }; };", "zone \"example.com\" { type slave; file \"slaves/example.com.zone\"; masters { 192.168.0.1; }; };", "notify yes; // notify all secondary nameservers", "notify yes; # notify all secondary nameservers", "notify yes; /* notify all secondary nameservers */", "USDINCLUDE /var/named/penguin.example.com", "USDORIGIN example.com.", "USDTTL 1D", "hostname IN A IP-address", "server1 IN A 10.0.1.3 IN A 10.0.1.5", "alias-name IN CNAME real-name", "server1 IN A 10.0.1.5 www IN CNAME server1", "IN MX preference-value email-server-name", "example.com. IN MX 10 mail.example.com. IN MX 20 mail2.example.com.", "IN NS nameserver-name", "IN NS dns1.example.com. IN NS dns2.example.com.", "last-IP-digit IN PTR FQDN-of-system", "@ IN SOA primary-name-server hostmaster-email ( serial-number time-to-refresh time-to-retry time-to-expire minimum-TTL )", "@ IN SOA dns1.example.com. hostmaster.example.com. ( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day", "604800 ; expire after 1 week", "USDORIGIN example.com. USDTTL 86400 @ IN SOA dns1.example.com. hostmaster.example.com. ( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day ; ; IN NS dns1.example.com. IN NS dns2.example.com. dns1 IN A 10.0.1.1 IN AAAA aaaa:bbbb::1 dns2 IN A 10.0.1.2 IN AAAA aaaa:bbbb::2 ; ; @ IN MX 10 mail.example.com. IN MX 20 mail2.example.com. mail IN A 10.0.1.5 IN AAAA aaaa:bbbb::5 mail2 IN A 10.0.1.6 IN AAAA aaaa:bbbb::6 ; ; ; This sample zone file illustrates sharing the same IP addresses ; for multiple services: ; services IN A 10.0.1.10 IN AAAA aaaa:bbbb::10 IN A 10.0.1.11 IN AAAA aaaa:bbbb::11 ftp IN CNAME services.example.com. www IN CNAME services.example.com. ; ;", "zone \"example.com\" IN { type master; file \"example.com.zone\"; allow-update { none; }; };", "USDORIGIN 1.0.10.in-addr.arpa. USDTTL 86400 @ IN SOA dns1.example.com. hostmaster.example.com. 
( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day ; @ IN NS dns1.example.com. ; 1 IN PTR dns1.example.com. 2 IN PTR dns2.example.com. ; 5 IN PTR server1.example.com. 6 IN PTR server2.example.com. ; 3 IN PTR ftp.example.com. 4 IN PTR ftp.example.com.", "zone \"1.0.10.in-addr.arpa\" IN { type master; file \"example.com.rr.zone\"; allow-update { none; }; };", "rndc [ option ...] command [ command-option ]", "~]# chmod o-rwx /etc/rndc.key", "~]# rndc status version: 9.7.0-P2-RedHat-9.7.0-5.P2.el6 CPUs found: 1 worker threads: 1 number of zones: 16 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running", "~]# rndc reload server reload successful", "~]# rndc reload localhost zone reload up-to-date", "~]# rndc reconfig", "~]# rndc freeze localhost", "~]# rndc thaw localhost The zone reload and thaw was successful.", "~]# rndc sign localhost", "zone \"localhost\" IN { type master; file \"named.localhost\"; allow-update { none; }; auto-dnssec maintain; };", "~]# rndc validation on", "~]# rndc validation off", "~]# rndc querylog", "dig [@ server ] [ option ...] name type", "dig name NS", "~]USD dig example.com NS ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> example.com NS ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57883 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;example.com. IN NS ;; ANSWER SECTION: example.com. 99374 IN NS a.iana-servers.net. example.com. 99374 IN NS b.iana-servers.net. ;; Query time: 1 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:04:06 2010 ;; MSG SIZE rcvd: 77", "dig name A", "~]USD dig example.com A ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> example.com A ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4849 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0 ;; QUESTION SECTION: ;example.com. IN A ;; ANSWER SECTION: example.com. 155606 IN A 192.0.32.10 ;; AUTHORITY SECTION: example.com. 99175 IN NS a.iana-servers.net. example.com. 99175 IN NS b.iana-servers.net. ;; Query time: 1 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:07:25 2010 ;; MSG SIZE rcvd: 93", "dig -x address", "~]USD dig -x 192.0.32.10 ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> -x 192.0.32.10 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29683 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 6 ;; QUESTION SECTION: ;10.32.0.192.in-addr.arpa. IN PTR ;; ANSWER SECTION: 10.32.0.192.in-addr.arpa. 21600 IN PTR www.example.com. ;; AUTHORITY SECTION: 32.0.192.in-addr.arpa. 21600 IN NS b.iana-servers.org. 32.0.192.in-addr.arpa. 21600 IN NS c.iana-servers.net. 32.0.192.in-addr.arpa. 21600 IN NS d.iana-servers.net. 32.0.192.in-addr.arpa. 21600 IN NS ns.icann.org. 32.0.192.in-addr.arpa. 21600 IN NS a.iana-servers.net. ;; ADDITIONAL SECTION: a.iana-servers.net. 13688 IN A 192.0.34.43 b.iana-servers.org. 5844 IN A 193.0.0.236 b.iana-servers.org. 5844 IN AAAA 2001:610:240:2::c100:ec c.iana-servers.net. 12173 IN A 139.91.1.10 c.iana-servers.net. 12173 IN AAAA 2001:648:2c30::1:10 ns.icann.org. 12884 IN A 192.0.34.126 ;; Query time: 156 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:25:15 2010 ;; MSG SIZE rcvd: 310" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-bind
Chapter 6. View OpenShift Data Foundation Topology
Chapter 6. View OpenShift Data Foundation Topology The topology view shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indications of alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal window's upper left corner to close it and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
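The same objects can also be inspected from the command line. The following sketch assumes the default openshift-storage namespace used by OpenShift Data Foundation; adjust it if your deployment uses a different namespace:

$ oc get deployments -n openshift-storage
$ oc get pods -n openshift-storage -o wide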
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_on_vmware_vsphere/viewing-odf-topology_rhodf
Release notes for Eclipse Temurin 17.0.10
Release notes for Eclipse Temurin 17.0.10 Red Hat build of OpenJDK 17 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.10/index
Chapter 13. Booting the installation media
Chapter 13. Booting the installation media After you have created bootable media, you are ready to boot the Red Hat Enterprise Linux installation. You can register RHEL using the Red Hat Content Delivery Network (CDN). CDN is a geographically distributed series of web servers. These servers provide, for example, packages and updates to RHEL hosts with a valid subscription. During the installation, registering and installing RHEL from the CDN offers the following benefits: Utilizing the latest packages for an up-to-date system immediately after installation, and integrated support for connecting to Red Hat Insights and enabling System Purpose. 13.1. Booting the installation from a network using HTTP When installing Red Hat Enterprise Linux on a large number of systems simultaneously, the best approach is to boot and install from a server on the local network. Follow the steps in this procedure to boot the Red Hat Enterprise Linux installation from a network using HTTP. Important To boot the installation process from a network, you must use a physical network connection, for example, Ethernet. You cannot boot the installation process with a wireless connection. Prerequisites You have configured an HTTP boot server, and there is a network interface in your system. See Additional resources for more information. You have configured your system to boot from the network interface. This option is in the UEFI, and can be labeled Network Boot or Boot Services . You have verified that the UEFI is configured to boot from the specified network interface and supports the HTTP boot standard. For more information, see your hardware's documentation. Your platform is x86_64, or you are installing in KVM. Procedure Verify that the network cable is attached. The link indicator light on the network socket should be lit, even if the computer is not switched on. Turn on the system. Depending on your hardware, some network setup and diagnostic information can be displayed before your system connects to an HTTP boot server. When connected, a menu is displayed according to the HTTP boot server configuration. Press the number key that corresponds to the option that you require. Note In some instances, boot options are not displayed. If this occurs, press the Enter key on your keyboard or wait until the boot window opens. The Red Hat Enterprise Linux boot window opens and displays information about a variety of available boot options. Use the arrow keys on your keyboard to select the boot option that you require, and press Enter to select the boot option. The Welcome to Red Hat Enterprise Linux window opens and you can install Red Hat Enterprise Linux using the graphical user interface. The installation program automatically begins if no action is performed in the boot window within 60 seconds. Optional: Edit the available boot options. Press E to enter edit mode. Change the predefined command line to add or remove boot options. Press Enter to confirm your choice. Additional Resources Automatically installing RHEL Boot options reference 13.2. Booting the installation from a network using PXE When installing Red Hat Enterprise Linux on a large number of systems simultaneously, the best approach is to boot and install from a server on the local network. Follow the steps in this procedure to boot the Red Hat Enterprise Linux installation from a network using PXE. Important To boot the installation process from a network, you must use a physical network connection, for example, Ethernet.
You cannot boot the installation process with a wireless connection. Prerequisites You have configured a TFTP server, and there is a network interface in your system that supports PXE. See Additional resources for more information. You have configured your system to boot from the network interface. This option is in the BIOS, and can be labeled Network Boot or Boot Services . You have verified that the BIOS is configured to boot from the specified network interface and supports the PXE standard. For more information, see your hardware's documentation. Your platform is x86_64 or you install in KVM. Note On the ARM platform on KVM, booting in the Preboot Execution Environment (PXE) is functional but not supported, Red Hat strongly discourages using it in the production environments. If you require PXE booting, it is only possible with the virtio-net-pci network interface controller (NIC). Procedure Verify that the network cable is attached. The link indicator light on the network socket should be lit, even if the computer is not switched on. Switch on the system. Depending on your hardware, some network setup and diagnostic information can be displayed before your system connects to a PXE server. When connected, a menu is displayed according to the PXE server configuration. Press the number key that corresponds to the option that you require. Note In some instances, boot options are not displayed. If this occurs, press the Enter key on your keyboard or wait until the boot window opens. The Red Hat Enterprise Linux boot window opens and displays information about a variety of available boot options. Use the arrow keys on your keyboard to select the boot option that you require, and press Enter to select the boot option. The Welcome to Red Hat Enterprise Linux window opens and you can install Red Hat Enterprise Linux using the graphical user interface. The installation program automatically begins if no action is performed in the boot window within 60 seconds. Optional: Edit the available boot options: UEFI-based systems Press E to enter edit mode. Change the predefined command line to add or remove boot options. Press Enter to confirm your choice. BIOS-based systems Press the Tab key on your keyboard to enter edit mode. Change the predefined command line to add or remove boot options. Press Enter to confirm your choice. Additional Resources * Automatically installing RHEL Boot options reference 13.3. Booting the installation on IBM Z to install RHEL in an LPAR 13.3.1. Booting the RHEL installation from an SFTP, FTPS, or FTP server to install in an IBM Z LPAR You can install RHEL into an LPAR by using an SFTP, FTPS, or FTP server. Procedure Log in on the IBM Z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. On the Systems tab, select the mainframe you want to work with, then on the Partitions tab select the LPAR to which you wish to install. At the bottom of the screen, under Daily , find Operating System Messages . Double-click Operating System Messages to show the text console on which Linux boot messages will appear. Double-click Load from Removable Media or Server . In the dialog box that follows, select SFTP/FTPS/FTP Server , and enter the following information: Host Computer - Host name or IP address of the FTP server you want to install from, for example ftp.redhat.com User ID - Your user name on the FTP server. Or, specify anonymous. Password - Your password. 
Use your email address if you are logging in as anonymous. File location (optional) - Directory on the FTP server holding the Red Hat Enterprise Linux for IBM Z, for example /rhel/s390x/ . Click Continue . In the dialog that follows, keep the default selection of generic.ins and click Continue . 13.3.2. Booting the RHEL installation from a prepared DASD to install in an IBM Z LPAR Use this procedure when installing Red Hat Enterprise Linux into an LPAR using an already prepared DASD. Procedure Log in on the IBM Z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. On the Systems tab, select the mainframe you want to work with, then on the Partitions tab select the LPAR to which you wish to install. At the bottom of the screen, under Daily , find Operating System Messages . Double-click Operating System Messages to show the text console on which Linux boot messages will appear. Double-click Load . In the dialog box that follows, select Normal as the Load type . As Load address , fill in the device number of the DASD. Click the OK button. 13.3.3. Booting the RHEL installation from an FCP-attached SCSI disk to install in an IBM Z LPAR Use this procedure when installing Red Hat Enterprise Linux into an LPAR using an already prepared FCP attached SCSI disk. Procedure Log in on the IBM Z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. On the Systems tab, select the mainframe you want to work with, then on the Partitions tab select the LPAR to which you wish to install. At the bottom of the screen, under Daily , find Operating System Messages . Double-click Operating System Messages to show the text console on which Linux boot messages will appear. Double-click Load . In the dialog box that follows, select SCSI as the Load type . As Load address , fill in the device number of the FCP channel connected with the SCSI disk. As World wide port name , fill in the WWPN of the storage system containing the disk as a 16-digit hexadecimal number. As Logical unit number , fill in the LUN of the disk as a 16-digit hexadecimal number. Leave the Boot record logical block address as 0 and the Operating system specific load parameters empty. Click the OK button. 13.4. Booting the installation on IBM Z to install RHEL in z/VM When installing under z/VM, you can boot from: The z/VM virtual reader A DASD or an FCP-attached SCSI disk prepared with the zipl boot loader 13.4.1. Booting the RHEL installation by using the z/VM Reader You can boot from the z/VM reader. Procedure If necessary, add the device containing the z/VM TCP/IP tools to your CMS disk list. For example: Replace fm with any FILEMODE letter. For a connection to an FTPS server, enter: Where host is the host name or IP address of the FTP server that hosts the boot images ( kernel.img and initrd.img ). Log in and execute the following commands. Use the (repl option if you are overwriting existing kernel.img , initrd.img , generic.prm , or redhat.exec files: Optional: Check whether the files were transferred correctly by using the CMS command filelist to show the received files and their format. It is important that kernel.img and initrd.img have a fixed record length format denoted by F in the Format column and a record length of 80 in the Lrecl column. For example: Press PF3 to quit filelist and return to the CMS prompt. 
Customize boot parameters in generic.prm as necessary. For details, see Customizing boot parameters . Another way to configure storage and network devices is by using a CMS configuration file. In such a case, add the CMSDASD= and CMSCONFFILE= parameters to generic.prm . Finally, execute the REXX script redhat.exec to boot the installation program: 13.4.2. Booting the RHEL installation by using a prepared DASD Perform the following steps to use a Prepared DASD: Procedure Boot from the prepared DASD and select the zipl boot menu entry referring to the Red Hat Enterprise Linux installation program. Use a command of the following form: Replace DASD_device_number with the device number of the boot device, and boot_entry_number with the zipl configuration menu for this device. For example: 13.4.3. Booting the RHEL installation by using a prepared FCP attached SCSI Disk Perform the following steps to boot from a prepared FCP-attached SCSI disk: Procedure Configure the SCSI boot loader of z/VM to access the prepared SCSI disk in the FCP Storage Area Network. Select the prepared zipl boot menu entry referring to the Red Hat Enterprise Linux installation program. Use a command of the following form: Replace WWPN with the World Wide Port Name of the storage system and LUN with the Logical Unit Number of the disk. The 16-digit hexadecimal numbers must be split into two pairs of eight digits each. For example: Optional: Confirm your settings with the command: Boot the FCP device connected with the storage system containing the disk with the following command: For example:
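The corresponding CP commands, reproduced from the command listing that follows, are for example:

cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0
query loaddev
cp ipl fc00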
[ "cp link tcpmaint 592 592 acc 592 fm", "ftp <host> (secure", "cd / location/of/install-tree /images/ ascii get generic.prm (repl get redhat.exec (repl locsite fix 80 binary get kernel.img (repl get initrd.img (repl quit", "VMUSER FILELIST A0 V 169 Trunc=169 Size=6 Line=1 Col=1 Alt=0 Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time REDHAT EXEC B1 V 22 1 1 4/15/10 9:30:40 GENERIC PRM B1 V 44 1 1 4/15/10 9:30:32 INITRD IMG B1 F 80 118545 2316 4/15/10 9:30:25 KERNEL IMG B1 F 80 74541 912 4/15/10 9:30:17", "redhat", "cp ipl DASD_device_number loadparm boot_entry_number", "cp ipl eb1c loadparm 0", "cp set loaddev portname WWPN lun LUN bootprog boot_entry_number", "cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0", "query loaddev", "cp ipl FCP_device", "cp ipl fc00" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/booting-the-installation-media_rhel-installer
Chapter 9. Red Hat build of Kogito on Red Hat OpenShift Container Platform
Chapter 9. Red Hat build of Kogito on Red Hat OpenShift Container Platform You can deploy Red Hat build of Kogito microservices on Red Hat OpenShift Container Platform for cloud implementation. In this architecture, Red Hat build of Kogito microservices are deployed as OpenShift pods that you can scale up and down individually to provide as few or as many containers as required for a particular service. To help you deploy your Red Hat build of Kogito microservices on OpenShift, Red Hat Process Automation Manager provides Red Hat Process Automation Manager Kogito Operator . This operator guides you through the deployment process. The operator is based on the Operator SDK and automates many of the deployment steps for you. For example, when you provide the operator with a link to the Git repository that contains your application, the operator automatically configures the components required to build your project from source and deploys the resulting services. To install the Red Hat Process Automation Manager Kogito Operator in OpenShift web console, go to Operators OperatorHub in the left menu, search for and select RHPAM Kogito Operator , and follow the on-screen instructions to install the latest operator version.
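If you prefer to install the operator from the command line instead of the web console, an OLM Subscription resource along the lines of the following sketch can be applied with oc apply -f . The package name and channel shown here are assumptions for illustration only and must match what OperatorHub actually lists for the RHPAM Kogito Operator on your cluster:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhpam-kogito-operator        # assumed package name; verify it in OperatorHub
  namespace: openshift-operators
spec:
  channel: 7.x                       # assumed channel; check the operator listing
  name: rhpam-kogito-operator        # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace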
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/con-kogito-microservices-on-ocp_deploying-kogito-microservices-on-openshift
Chapter 2. Installation
Chapter 2. Installation This chapter guides you through the steps to install AMQ Spring Boot Starter in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ Spring Boot Starter, you must install Apache Maven . To use AMQ Spring Boot Starter, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>3.1.2.redhat-00001</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ Spring Boot Starter can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Spring Boot Starter 3.1.2 Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-spring-boot-starter-3.1.2-maven-repository-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" .
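As a sketch of that last step, a profile in ~/.m2/settings.xml can point Maven at the extracted file-based repository. The path below is illustrative and should be replaced with the maven-repository directory you extracted, and the profile must also be activated, for example through an activeProfiles entry:

<profile>
  <id>amq-local-repo</id>
  <repositories>
    <repository>
      <id>amq-spring-boot-starter-local</id>
      <!-- illustrative path; use your extracted maven-repository directory -->
      <url>file:///path/to/maven-repository</url>
    </repository>
  </repositories>
</profile>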
[ "<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>", "<dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>3.1.2.redhat-00001</version> </dependency>", "unzip amq-spring-boot-starter-3.1.2-maven-repository-maven-repository.zip" ]
https://docs.redhat.com/en/documentation/amq_spring_boot_starter/3.0/html/using_the_amq_spring_boot_starter/installation
5.22. cdrkit
5.22. cdrkit 5.22.1. RHBA-2012:1451 - cdrkit bug fix update Updated cdrkit packages that fix one bug are now available for Red Hat Enterprise Linux 6. The cdrkit packages contain a collection of CD/DVD utilities for generating the ISO9660 file-system and burning media. Bug Fix BZ# 797990 Prior to this update, overlapping memory was handled incorrectly. As a consequence, newly created paths could be garbled when calling "genisoimage" with the "-graft-points" option to graft the paths at points other than the root directory. This update modifies the underlying code to generate graft paths as expected. All users of cdrkit are advised to upgrade to these updated packages, which fix this bug.
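For context, the fixed behavior concerns invocations that graft a directory into a path other than the image root. A hedged example with illustrative paths:

# graft the local directory /home/user/docs into the image under /project/docs
genisoimage -r -J -o output.iso -graft-points /project/docs/=/home/user/docs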
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/cdrkit
Uninstall
Uninstall builds for Red Hat OpenShift 1.3 Uninstalling Builds Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.3/html-single/uninstall/index
Red Hat Single Sign-On for OpenShift
Red Hat Single Sign-On for OpenShift Red Hat Single Sign-On 7.6 For Use with Red Hat Single Sign-On 7.6 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/red_hat_single_sign-on_for_openshift/index
Chapter 2. Configuring the monitoring stack
Chapter 2. Configuring the monitoring stack The OpenShift Container Platform installation program provides only a small number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens after the installation. This section explains what configuration is supported, shows how to configure the monitoring stack, and demonstrates several common configuration scenarios. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration. 2.1. Prerequisites The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 2.2. Maintenance and support for monitoring Not all configuration options for the monitoring stack are exposed. The only supported way of configuring OpenShift Container Platform monitoring is by configuring the Cluster Monitoring Operator (CMO) using the options described in the Config map reference for the Cluster Monitoring Operator . Do not use other configurations, as they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in the Config map reference for the Cluster Monitoring Operator , your changes will disappear because the CMO automatically reconciles any differences and resets any unsupported changes back to the originally defined state by default and by design. Important Installing another Prometheus instance is not supported by the Red Hat Site Reliability Engineers (SRE). 2.2.1. Support considerations for monitoring Note Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed. The following modifications are explicitly not supported: Creating additional ServiceMonitor , PodMonitor , and PrometheusRule objects in the openshift-* and kube-* projects. Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the OpenShift Container Platform monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility. Note The Alertmanager configuration is deployed as the alertmanager-main secret resource in the openshift-monitoring namespace. If you have enabled a separate Alertmanager instance for user-defined alert routing, an Alertmanager configuration is also deployed as the alertmanager-user-workload secret resource in the openshift-user-workload-monitoring namespace. To configure additional routes for any instance of Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement. Modifying resources of the stack. The OpenShift Container Platform monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them. Deploying user-defined workloads to openshift-* and kube-* projects. These projects are reserved for Red Hat-provided components and they should not be used for user-defined workloads. 
Manually deploying monitoring resources into namespaces that have the openshift.io/cluster-monitoring: "true" label. Adding the openshift.io/cluster-monitoring: "true" label to namespaces. This label is reserved only for the namespaces with core OpenShift Container Platform components and Red Hat certified components. Installing custom Prometheus instances on OpenShift Container Platform. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator. Enabling symptom-based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator. Modifying the default platform monitoring components. You should not modify any of the components defined in the cluster-monitoring-config config map. Red Hat SRE uses these components to monitor the core cluster components and Kubernetes services. 2.2.2. Support policy for monitoring Operators Monitoring Operators ensure that OpenShift Container Platform monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates. While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. Overriding the Cluster Version Operator The spec.overrides parameter can be added to the configuration for the CVO to allow administrators to provide a list of overrides to the behavior of the CVO for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed. 2.2.3. Support version matrix for monitoring components The following matrix contains information about versions of monitoring components for OpenShift Container Platform 4.10 and later releases: Table 2.1. OpenShift Container Platform and component versions
OpenShift Container Platform | Prometheus Operator | Prometheus | Prometheus Adapter | Alertmanager | kube-state-metrics agent | node-exporter agent | Thanos
4.12 | 0.60.1 | 2.39.1 | 0.10.0 | 0.24.0 | 2.6.0 | 1.4.0 | 0.28.1
4.11 | 0.57.0 | 2.36.2 | 0.9.1 | 0.24.0 | 2.5.0 | 1.3.1 | 0.26.0
4.10 | 0.53.1 | 2.32.1 | 0.9.1 | 0.23.0 | 2.3.0 | 1.3.1 | 0.23.2
Note The openshift-state-metrics agent and Telemeter Client are OpenShift-specific components. Therefore, their versions correspond with the versions of OpenShift Container Platform. 2.3. Preparing to configure the monitoring stack You can configure the monitoring stack by creating and updating monitoring config maps. These config maps configure the Cluster Monitoring Operator (CMO), which in turn configures the components of the monitoring stack. 2.3.1. 
Creating a cluster monitoring config map You can configure the core OpenShift Container Platform monitoring components by creating the cluster-monitoring-config ConfigMap object in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Check whether the cluster-monitoring-config ConfigMap object exists: USD oc -n openshift-monitoring get configmap cluster-monitoring-config If the ConfigMap object does not exist: Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | Apply the configuration to create the ConfigMap object: USD oc apply -f cluster-monitoring-config.yaml 2.3.2. Creating a user-defined workload monitoring config map You can configure the user workload monitoring components with the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. The Cluster Monitoring Operator (CMO) then configures the components that monitor user-defined projects. Note If you enable monitoring for user-defined projects, the user-workload-monitoring-config ConfigMap object is created by default. When you save your changes to the user-workload-monitoring-config ConfigMap object, some or all of the pods in the openshift-user-workload-monitoring project might be redeployed. It can sometimes take a while for these components to redeploy. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Check whether the user-workload-monitoring-config ConfigMap object exists: USD oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config If the user-workload-monitoring-config ConfigMap object does not exist: Create the following YAML manifest. In this example the file is called user-workload-monitoring-config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | Apply the configuration to create the ConfigMap object: USD oc apply -f user-workload-monitoring-config.yaml Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Additional resources Enabling monitoring for user-defined projects 2.4. Granting users permissions for core platform monitoring As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions for core platform monitoring. You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Name Description Project cluster-monitoring-metrics-api Users with this role have the ability to access Thanos Querier API endpoints. Additionally, it grants access to the core platform Prometheus API and user-defined Thanos Ruler API endpoints. openshift-monitoring cluster-monitoring-operator-alert-customization Users with this role can manage AlertingRule and AlertRelabelConfig resources for core platform monitoring. These permissions are required for the alert customization feature. 
openshift-monitoring monitoring-alertmanager-edit Users with this role can manage the Alertmanager API for core platform monitoring. They can also manage alert silences in the Administrator perspective of the OpenShift Container Platform web console. openshift-monitoring monitoring-alertmanager-view Users with this role can monitor the Alertmanager API for core platform monitoring. They can also view alert silences in the Administrator perspective of the OpenShift Container Platform web console. openshift-monitoring cluster-monitoring-view Users with this cluster role have the same access rights as cluster-monitoring-metrics-api role, with additional permissions, providing access to the /federate endpoint for the user-defined Prometheus. Must be bound with ClusterRoleBinding to gain access to the /federate endpoint for the user-defined Prometheus. Additional resources Granting user permissions by using the web console Granting user permissions by using the CLI 2.5. Configuring the monitoring stack In OpenShift Container Platform 4.12, you can configure the monitoring stack using the cluster-monitoring-config or user-workload-monitoring-config ConfigMap objects. Config maps configure the Cluster Monitoring Operator (CMO), which in turn configures the components of the stack. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object. To configure core OpenShift Container Platform monitoring components : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration> : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: <configuration_for_the_component> Substitute <component> and <configuration_for_the_component> accordingly. The following example ConfigMap object configures a persistent volume claim (PVC) for Prometheus. This relates to the Prometheus instance that monitors core OpenShift Container Platform components only: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: 1 volumeClaimTemplate: spec: storageClassName: fast volumeMode: Filesystem resources: requests: storage: 40Gi 1 Defines the Prometheus component and the subsequent lines define its configuration. 
To configure components that monitor user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration> : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: <configuration_for_the_component> Substitute <component> and <configuration_for_the_component> accordingly. The following example ConfigMap object configures a data retention period and minimum container resource requests for Prometheus. This relates to the Prometheus instance that monitors user-defined projects only: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: 1 retention: 24h 2 resources: requests: cpu: 200m 3 memory: 2Gi 4 1 Defines the Prometheus component and the subsequent lines define its configuration. 2 Configures a twenty-four hour data retention period for the Prometheus instance that monitors user-defined projects. 3 Defines a minimum resource request of 200 millicores for the Prometheus container. 4 Defines a minimum pod resource request of 2 GiB of memory for the Prometheus container. Note The Prometheus config map component is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object. Save the file to apply the changes to the ConfigMap object. Warning Different configuration changes to the ConfigMap object result in different outcomes: The pods are not redeployed. Therefore, there is no service outage. The affected pods are redeployed: For single-node clusters, this results in temporary service outage. For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. Each procedure that requires a change in the config map includes its expected outcome. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps Enabling monitoring for user-defined projects 2.6. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects: Table 2.2. Configurable monitoring components Component cluster-monitoring-config config map key user-workload-monitoring-config config map key Prometheus Operator prometheusOperator prometheusOperator Prometheus prometheusK8s prometheus Alertmanager alertmanagerMain alertmanager kube-state-metrics kubeStateMetrics openshift-state-metrics openshiftStateMetrics Telemeter Client telemeterClient Prometheus Adapter k8sPrometheusAdapter Thanos Querier thanosQuerier Thanos Ruler thanosRuler Note The Prometheus key is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object. 2.7. Using node selectors to move monitoring components By using the nodeSelector constraint with labeled nodes, you can move any of the monitoring stack components to specific nodes. 
By doing so, you can control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and segregate workloads based on specific requirements or policies. 2.7.1. How node selectors work with other constraints If you move monitoring components by using node selector constraints, be aware that other constraints to control pod scheduling might exist for a cluster: Topology spread constraints might be in place to control pod placement. Hard anti-affinity rules are in place for Prometheus, Thanos Querier, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available. When scheduling pods onto nodes, the pod scheduler tries to satisfy all existing constraints when determining pod placement. That is, all constraints compound when the pod scheduler determines which pods will be placed on which nodes. Therefore, if you configure a node selector constraint but existing constraints cannot all be satisfied, the pod scheduler cannot match all constraints and will not schedule a pod for placement onto a node. To maintain resilience and high availability for monitoring components, ensure that enough nodes are available and match all constraints when you configure a node selector constraint to move a component. Additional resources Understanding how to update labels on nodes Placing pods on specific nodes using node selectors Placing pods relative to other pods using affinity and anti-affinity rules Controlling pod placement by using pod topology spread constraints Configuring pod topology spread constraints for monitoring Kubernetes documentation about node selectors 2.7.2. Moving monitoring components to different nodes To specify the nodes in your cluster on which monitoring stack components will run, configure the nodeSelector constraint in the component's ConfigMap object to match labels assigned to the nodes. Note You cannot add a node selector constraint directly to an existing scheduled pod. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). 
Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node-name> <node-label> Edit the ConfigMap object: To move a component that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...> 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node-label-1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. To move a component that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...> 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node-label-1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps Enabling monitoring for user-defined projects 2.8. Assigning tolerations to monitoring components You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the ConfigMap object: To assign tolerations to a component that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the alertmanagerMain component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" To assign tolerations to a component that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps Enabling monitoring for user-defined projects See the OpenShift Container Platform documentation on taints and tolerations See the Kubernetes documentation on taints and tolerations 2.9. Setting the body size limit for metrics scraping By default, no limit exists for the uncompressed body size for data returned from scraped metrics targets. You can set a body size limit to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. In addition, by setting a body size limit, you can reduce the impact that a malicious target might have on Prometheus and on the cluster as a whole. After you set a value for enforcedBodySizeLimit , the alert PrometheusScrapeBodySizeLimitHit fires when at least one Prometheus scrape target replies with a response body larger than the configured value. Note If metrics data scraped from a target has an uncompressed body size exceeding the configured size limit, the scrape fails. 
Prometheus then considers this target to be down and sets its up metric value to 0 , which can trigger the TargetDown alert. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a value for enforcedBodySizeLimit to data/config.yaml/prometheusK8s to limit the body size that can be accepted per target scrape: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: enforcedBodySizeLimit: 40MB 1 1 Specify the maximum body size for scraped metrics targets. This enforcedBodySizeLimit example limits the uncompressed size per target scrape to 40 megabytes. Valid numeric values use the Prometheus data size format: B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The default value is 0 , which specifies no limit. You can also set the value to automatic to calculate the limit automatically based on cluster capacity. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Prometheus scrape configuration documentation 2.10. Configuring a dedicated service monitor You can configure OpenShift Container Platform core platform monitoring to use dedicated service monitors to collect metrics for the resource metrics pipeline. When enabled, a dedicated service monitor exposes two additional metrics from the kubelet endpoint and sets the value of the honorTimestamps field to true. By enabling a dedicated service monitor, you can improve the consistency of Prometheus Adapter-based CPU usage measurements used by, for example, the oc adm top pod command or the Horizontal Pod Autoscaler. 2.10.1. Enabling a dedicated service monitor You can configure core platform monitoring to use a dedicated service monitor by configuring the dedicatedServiceMonitors key in the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add an enabled: true key-value pair as shown in the following sample: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: dedicatedServiceMonitors: enabled: true 1 1 Set the value of the enabled field to true to deploy a dedicated service monitor that exposes the kubelet /metrics/resource endpoint. Save the file to apply the changes automatically. Warning When you save changes to a cluster-monitoring-config config map, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also restart. 2.11. Configuring persistent storage Run cluster monitoring with persistent storage to gain the following benefits: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. 
Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. For production environments, it is highly recommended to configure persistent storage. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. 2.11.1. Persistent storage prerequisites Dedicate sufficient persistent storage to ensure that the disk does not become full. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Important Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes. Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 2.11.2. Configuring a persistent volume claim To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To configure a PVC for a component that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the core monitoring component for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 3 Specify the amount of required storage. See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate . 
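If you are unsure which value to use for <storage_class> , you can list the storage classes defined in the cluster first. This check is a general OpenShift command and is not a step of the documented procedure:

# Lists the available storage classes; the class marked "(default)" is used
# when storageClassName is omitted from the volumeClaimTemplate.
oc get storageclass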
The following example configures a PVC that claims persistent storage for the Prometheus instance that monitors core OpenShift Container Platform components: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s : volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi To configure a PVC for a component that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the component for user-defined monitoring for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 3 Specify the amount of required storage. See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate . The following example configures a PVC that claims persistent storage for Thanos Ruler: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler : volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. Warning When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage. 2.11.3. Resizing a persistent volume You can resize a persistent volume (PV) for monitoring components, such as Prometheus, Thanos Ruler, or Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. Important You can only expand the size of the PVC. Shrinking the storage size is not possible. Prerequisites You have installed the OpenShift CLI ( oc ). If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have configured at least one PVC for core OpenShift Container Platform monitoring components. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have configured at least one PVC for components that monitor user-defined projects. Procedure Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes . 
Edit the ConfigMap object: If you are configuring core OpenShift Container Platform monitoring components : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the previous value. The following example sets the new PVC request to 100 gigabytes for the Prometheus instance that monitors core OpenShift Container Platform components: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi If you are configuring components that monitor user-defined projects : Note You can resize the volumes for the Thanos Ruler and for instances of Alertmanager and Prometheus that monitor user-defined projects. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Update the PVC configuration for the monitoring component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the previous value. The following example sets the new PVC request to 20 gigabytes for Thanos Ruler: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Warning When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Prometheus database storage requirements Expanding persistent volume claims (PVCs) with a file system 2.11.4. Modifying the retention time and size for Prometheus metrics data By default, Prometheus retains metrics data for the following durations: Core platform monitoring : 15 days Monitoring for user-defined projects : 24 hours You can modify the retention time to change how soon data is deleted by specifying a time value in the retention field. You can also configure the maximum amount of disk space the retained metrics data uses by specifying a size value in the retentionSize field. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit. 
Note the following behaviors of these data retention settings: The size-based retention policy applies to all data block directories in the /prometheus directory, including persistent blocks, write-ahead log (WAL) data, and m-mapped chunks. Data in the /wal and /head_chunks directories counts toward the retention size limit, but Prometheus never purges data from these directories based on size- or time-based retention policies. Thus, if you set a retention size limit lower than the maximum size set for the /wal and /head_chunks directories, you have configured the system not to retain any data blocks in the /prometheus data directories. The size-based retention policy is applied only when Prometheus cuts a new data block, which occurs every two hours after the WAL contains at least three hours of data. If you do not explicitly define values for either retention or retentionSize , retention time defaults to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. Retention size is not set. If you define values for both retention and retentionSize , both values apply. If any data blocks exceed the defined retention time or the defined size limit, Prometheus purges these data blocks. If you define a value for retentionSize and do not define retention , only the retentionSize value applies. If you do not define a value for retentionSize and only define a value for retention , only the retention value applies. If you set the retentionSize or retention value to 0 , the default settings apply. The default settings set retention time to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. By default, retention size is not set. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To modify the retention time and size for the Prometheus instance that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 
2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors core OpenShift Container Platform components: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB To modify the retention time and size for the Prometheus instance that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 2.11.5. Modifying the retention time for Thanos Ruler metrics data By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Prerequisites You have installed the OpenShift CLI ( oc ). A cluster administrator has enabled monitoring for user-defined projects. You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1 1 Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . The default is 24h . 
The following example sets the retention time to 10 days for Thanos Ruler data: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Creating a cluster monitoring config map Prometheus database storage requirements Recommended configurable storage technology Understanding persistent storage Optimizing storage Enabling monitoring for user-defined projects 2.12. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites If you are configuring core OpenShift Container Platform monitoring components: You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects: You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. Important Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support. You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the same namespace as the Prometheus object for which you configure remote write: the openshift-monitoring namespace for default platform monitoring or the openshift-user-workload-monitoring namespace for user workload monitoring. Warning To reduce security risks, use HTTPS and authentication to send metrics to an endpoint. Procedure Follow these steps to configure remote write for default platform monitoring in the cluster-monitoring-config config map in the openshift-monitoring namespace. Note If you configure remote write for the Prometheus instance that monitors user-defined projects, make similar edits to the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note that the Prometheus config map component is called prometheus in the user-workload-monitoring-config ConfigMap object and not prometheusK8s , as it is in the cluster-monitoring-config ConfigMap object. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheusK8s . 
Add an endpoint URL and authentication credentials in this section: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_credentials> 2 1 The URL of the remote write endpoint. 2 The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods. Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> <your_write_relabel_configs> 1 1 The write relabel configuration settings. For <your_write_relabel_configs> substitute a list of write relabel configurations for metrics that you want to send to the remote endpoint. The following sample shows how to forward a single metric called my_metric : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep See the Prometheus relabel_config documentation for information about write relabel configuration options. Save the file to apply the changes. The new configuration is applied automatically. 2.12.1. Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, Basic authentication, authentication using HTTP in an Authorization request header, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. Authentication method Config map field Description AWS Signature Version 4 sigv4 This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication. Basic authentication basicAuth Basic authentication sets the authorization header on every remote write request with the configured username and password. authorization authorization Authorization sets the Authorization header on every remote write request using the configured token. OAuth 2.0 oauth2 An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication. TLS client tlsConfig A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file. 2.12.1.1. 
Config map location for authentication settings The following shows the location of the authentication configuration in the ConfigMap object for default platform monitoring. apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_details> 2 1 The URL of the remote write endpoint. 2 The required configuration details for the authentication method for the endpoint. Currently supported authentication methods are Amazon Web Services (AWS) Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. Note If you configure remote write for the Prometheus instance that monitors user-defined projects, edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note that the Prometheus config map component is called prometheus in the user-workload-monitoring-config ConfigMap object and not prometheusK8s , as it is in the cluster-monitoring-config ConfigMap object. 2.12.1.2. Example remote write authentication settings The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with default platform monitoring in the openshift-monitoring namespace. Sample YAML for AWS Signature Version 4 authentication The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-monitoring namespace. apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque 1 The AWS API access key. 2 The AWS API secret key. The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7 1 The AWS region. 2 4 The name of the Secret object containing the AWS API access credentials. 3 The key that contains the AWS API access key in the specified Secret object. 5 The key that contains the AWS API secret key in the specified Secret object. 6 The name of the AWS profile that is being used to authenticate. 7 The unique identifier for the Amazon Resource Name (ARN) assigned to your role. Sample YAML for Basic authentication The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque 1 The username. 2 The password. The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-monitoring namespace. 
It assumes that you have already set up authentication credentials for the endpoint. apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4 1 3 The name of the Secret object that contains the authentication credentials. 2 The key that contains the username in the specified Secret object. 4 The key that contains the password in the specified Secret object. Sample YAML for authentication with a bearer token using a Secret Object The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque 1 The authentication token. The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3 1 The authentication type of the request. The default value is Bearer . 2 The name of the Secret object that contains the authentication credentials. 3 The key that contains the authentication token in the specified Secret object. Sample YAML for OAuth 2.0 authentication The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque 1 The Oauth 2.0 ID. 2 The OAuth 2.0 secret. The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://test.example.com/api/write" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2> 1 3 The name of the corresponding Secret object. Note that ClientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object. 2 4 The key that contains the OAuth 2.0 credentials in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . 6 The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access. 7 The OAuth 2.0 authorization request parameters required for the authorization server. Sample YAML for TLS client authentication The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-monitoring namespace. 
apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls 1 The CA certificate in the Prometheus container with which to validate the server certificate. 2 The client certificate for authentication with the server. 3 The client key. The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle . apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6 1 3 5 The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object. 2 The key in the specified Secret object that contains the CA certificate for the endpoint. 4 The key in the specified Secret object that contains the client certificate for the endpoint. 6 The key in the specified Secret object that contains the client key secret. 2.12.2. Example remote write queue configuration You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for default platform monitoring in the openshift-monitoring namespace. Example configuration of remote write parameters with default values apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 1 The number of samples to buffer per shard before they are dropped from the queue. 2 The minimum number of shards. 3 The maximum number of shards. 4 The maximum number of samples per send. 5 The maximum time for a sample to wait in buffer. 6 The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxbackoff time. 7 The maximum time to wait before retrying a failed request. 8 Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage. Additional resources Prometheus REST API reference for remote write Setting up remote write compatible endpoints (Prometheus documentation) Tuning remote write settings (Prometheus documentation) Understanding secrets 2.13. Adding cluster ID labels to metrics If you manage multiple OpenShift Container Platform clusters and use the remote write feature to send metrics data from these clusters to an external storage location, you can add cluster ID labels to identify the metrics data coming from different clusters. You can then query these labels to identify the source cluster for a metric and distinguish that data from similar metrics data sent by other clusters. This way, if you manage many clusters for multiple customers and send metrics data to a single centralized storage system, you can use cluster ID labels to query metrics for a particular cluster or customer. 
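For example, after a cluster_id label is attached to outgoing metrics, a query in the centralized storage system can filter on it. The following PromQL sketch is illustrative only: container_cpu_usage_seconds_total is a standard cAdvisor metric, and <cluster_id_value> is a placeholder for the ID of the source cluster; substitute your own metric and label values.
# Hedged example: per-namespace CPU usage reported by a single source cluster.
sum by (namespace) (
  rate(container_cpu_usage_seconds_total{cluster_id="<cluster_id_value>"}[5m])
)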
Creating and using cluster ID labels involves three general steps: Configuring the write relabel settings for remote write storage. Adding cluster ID labels to the metrics. Querying these labels to identify the source cluster or customer for a metric. 2.13.1. Creating cluster ID labels for metrics You can create cluster ID labels for metrics for default platform monitoring and for user workload monitoring. For default platform monitoring, you add cluster ID labels for metrics in the write_relabel settings for remote write storage in the cluster-monitoring-config config map in the openshift-monitoring namespace. For user workload monitoring, you edit the settings in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note When Prometheus scrapes user workload targets that expose a namespace label, the system stores this label as exported_namespace . This behavior ensures that the final namespace label value is equal to the namespace of the target pod. You cannot override this default configuration by setting the value of the honorLabels field to true for PodMonitor or ServiceMonitor objects. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured remote write storage. If you are configuring default platform monitoring components: You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects: You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Note If you configure cluster ID labels for metrics for the Prometheus instance that monitors user-defined projects, edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note that the Prometheus component is called prometheus in this config map and not prometheusK8s , which is the name used in the cluster-monitoring-config config map. In the writeRelabelConfigs: section under data/config.yaml/prometheusK8s/remoteWrite , add cluster ID relabel configuration values: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2 1 Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. 2 Substitute the label configuration for the metrics sent to the remote write endpoint. The following sample shows how to forward a metric with the cluster ID label cluster_id in default platform monitoring: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3 1 The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__ . 
This temporary label gets replaced by the cluster ID label name that you specify. 2 Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__ . The final relabeling step removes labels that use this name. 3 The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Configuring remote write storage Obtaining your cluster ID 2.14. Controlling the impact of unbound metrics attributes in user-defined projects Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. Using many unbound attributes in labels can create exponentially more time series, which can impact Prometheus performance and available disk space. Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Limit the number of scraped labels, the length of label names, and the length of label values. Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note To prevent issues caused by adding many unbound attributes, limit the number of scrape samples, label names, and unbound attributes you define for metrics. Also reduce the number of potential key-value pair combinations by using attributes that are bound to a limited set of possible values. 2.14.1. Setting scrape sample and label limits for user-defined projects You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values. Warning If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1 1 A value is required if this parameter is specified. 
This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000. Add the enforcedLabelLimit , enforcedLabelNameLengthLimit , and enforcedLabelValueLengthLimit configurations to data/config.yaml to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3 1 Specifies the maximum number of labels per scrape. The default value is 0 , which specifies no limit. 2 Specifies the maximum length in characters of a label name. The default value is 0 , which specifies no limit. 3 Specifies the maximum length in characters of a label value. The default value is 0 , which specifies no limit. Save the file to apply the changes. The limits are applied automatically. 2.14.2. Creating scrape sample alerts You can create alerts that notify you when: The target cannot be scraped or is not available for the specified for duration A scrape sample threshold is reached or is exceeded for the specified for duration Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit . You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf "%.4g" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11 1 Defines the name of the alerting rule. 2 Specifies the user-defined project where the alerting rule will be deployed. 3 The TargetDown alert will fire if the target cannot be scraped or is not available for the for duration. 4 The message that will be output when the TargetDown alert fires. 5 The conditions for the TargetDown alert must be true for this duration before the alert is fired. 6 Defines the severity for the TargetDown alert. 7 The ApproachingEnforcedSamplesLimit alert will fire when the defined scrape sample threshold is reached or exceeded for the specified for duration. 8 The message that will be output when the ApproachingEnforcedSamplesLimit alert fires. 
9 The threshold for the ApproachingEnforcedSamplesLimit alert. In this example the alert will fire when the number of samples per target scrape has exceeded 80% of the enforced sample limit of 50000 . The for duration must also have passed before the alert will fire. The <number> in the expression scrape_samples_scraped/<number> > <threshold> must match the enforcedSampleLimit value defined in the user-workload-monitoring-config ConfigMap object. 10 The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired. 11 Defines the severity for the ApproachingEnforcedSamplesLimit alert. Apply the configuration to the user-defined project: USD oc apply -f monitoring-stack-alerts.yaml Additional resources Creating a user-defined workload monitoring config map Enabling monitoring for user-defined projects See Determining why Prometheus is consuming a lot of disk space for steps to query which metrics have the highest number of scrape samples.
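As a quick, hedged illustration of the kind of query that troubleshooting step refers to, the following PromQL examples can be run from Observe → Metrics in the web console to see which targets and metric names contribute the most samples. scrape_samples_scraped is a standard Prometheus metric; the second query counts series per metric name and can be expensive on large clusters.
# Targets whose most recent scrape returned the most samples.
topk(10, scrape_samples_scraped)
# Metric names with the largest number of active time series.
topk(10, count by (__name__) ({__name__=~".+"}))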
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc -n openshift-monitoring get configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |", "oc apply -f cluster-monitoring-config.yaml", "oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |", "oc apply -f user-workload-monitoring-config.yaml", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: <configuration_for_the_component>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: 1 volumeClaimTemplate: spec: storageClassName: fast volumeMode: Filesystem resources: requests: storage: 40Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: <configuration_for_the_component>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: 1 retention: 24h 2 resources: requests: cpu: 200m 3 memory: 2Gi 4", "oc label nodes <node-name> <node-label>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...>", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: 
enforcedBodySizeLimit: 40MB 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: dedicatedServiceMonitors: enabled: true 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s : volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler : volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB", "oc -n 
openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> <your_write_relabel_configs> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_details> 2", "apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7", "apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4", "apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3", "apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 
1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>", "apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11", "oc apply -f monitoring-stack-alerts.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/configuring-the-monitoring-stack
Chapter 9. Gathering the observability data from multiple clusters
Chapter 9. Gathering the observability data from multiple clusters For a multicluster configuration, you can create one OpenTelemetry Collector instance in each one of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance. Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. The following mounted certificates: Issuer, self-signed certificate, CA issuer, client and server certificates. To create any of these certificates, see step 1. Procedure Mount the following certificates in the OpenTelemetry Collector instance, skipping already mounted certificates. An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {} A self-signed certificate. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io A CA issuer. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret The client and server certificates. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 2 issuerRef: name: ca-issuer 1 List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance. 2 List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance. Create a service account for the OpenTelemetry Collector instance. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespace resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters. 
Example OpenTelemetryCollector custom resource for the edge clusters apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster. Example OpenTelemetryCollector custom resource for the central cluster apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: "deployment" ingress: type: route route: termination: "passthrough" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: "tempo-<simplest>-distributor:4317" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector receiver requires the certificates listed in the first step. 2 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, which in this example is "tempo-simplest-distributor:4317" and already created.
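A brief verification sketch (not part of the original procedure): after both OpenTelemetryCollector resources are created, you can confirm that the central receiver route exists and that the edge Collector pods are running. The namespaces below follow the examples above; adjust them if yours differ.
# On the central cluster: confirm the passthrough route created for the otlp-receiver Collector.
oc get route -n observability
# On each edge cluster: confirm the daemonset Collector pods are running.
oc get pods -n otel-collector-<example>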
[ "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/otel-gathering-observability-data-from-multiple-clusters
4.102. isdn4k-utils
4.102. isdn4k-utils 4.102.1. RHBA-2011:1169 - isdn4k-utils bug fix update Updated isdn4k-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The isdn4k-utils package contains a collection of utilities needed for configuring an ISDN subsystem. Bug Fixes BZ# 618653 Prior to this update, the isdn and capi init scripts were not LSB compatible. Consequently, these init scripts exited with incorrect or invalid exit statuses. This update modifies the init scripts so that they are LSB compatible. Now the init scripts exit with the correct exit status. (BZ# 618549) All users of isdn4k-utils are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/isdn4k-utils
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Jira ticket: Log in to Jira. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialog.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_google_cloud/providing-feedback-on-red-hat-documentation_gcp
function::proc_mem_size_pid
function::proc_mem_size_pid Name function::proc_mem_size_pid - Total program virtual memory size in pages Synopsis Arguments pid The PID of the process to examine Description Returns the total virtual memory size in pages of the given process, or zero when that process does not exist or the number of pages cannot be retrieved.
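A minimal usage sketch, assuming SystemTap is installed along with the kernel debuginfo it needs; the PID value 1234 and the five-second timer are arbitrary illustration values, not part of the original reference.
# Print the virtual memory size, in pages, of PID 1234 every 5 seconds.
stap -e 'probe timer.s(5) { printf("pages: %d\n", proc_mem_size_pid(1234)) }'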
[ "proc_mem_size_pid:long(pid:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-proc-mem-size-pid
function::stop_stopwatch
function::stop_stopwatch Name function::stop_stopwatch - Stop a stopwatch Synopsis Arguments name the stopwatch name Description Stops stopwatch name. Creates stopwatch name if it does not currently exist.
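A minimal sketch of how stop_stopwatch fits together with the related stopwatch tapset functions start_stopwatch and read_stopwatch_ms; the probe points, the ten-second timer, and the stopwatch name "example" are illustrative choices, not taken from the reference above.
# stopwatch-example.stp: start a named stopwatch when the session begins,
# stop it after ten seconds, then print the elapsed time in milliseconds.
probe begin {
  start_stopwatch("example")
}
probe timer.s(10) {
  stop_stopwatch("example")
  printf("elapsed: %d ms\n", read_stopwatch_ms("example"))
  exit()
}
Run the script with stap stopwatch-example.stp.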
[ "stop_stopwatch(name:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stop-stopwatch
Chapter 11. Identity: Managing Services
Chapter 11. Identity: Managing Services Some services that run on a host can also belong to the IdM domain. Any service that can store a Kerberos principal or an SSL certificate (or both) can be configured as an IdM service. Adding a service to the IdM domain allows the service to request an SSL certificate or keytab from the domain. (Only the public key for the certificate is stored in the service record. The private key is local to the service.) An IdM domain establishes a commonality between machines, with common identity information, common policies, and shared services. Any machine which belongs to a domain functions as a client of the domain, which means it uses the services that the domain provides. An IdM domain (as described in Section 1.2, "Bringing Linux Services Together" ) provides three main services specifically for machines: DNS Kerberos Certificate management 11.1. Adding and Editing Service Entries and Keytabs As with host entries, service entries for the host (and any other services on that host which will belong to the domain) must be added manually to the IdM domain. This is a two step process. First, the service entry must be created, and then a keytab must be created for that service which it will use to access the domain. By default, Identity Management saves its HTTP keytab to /etc/httpd/conf/ipa.keytab . Note This keytab is used for the web UI. If a key were stored in ipa.keytab and that keytab file is deleted, the IdM web UI will stop working, because the original key would also be deleted. Similar locations can be specified for each service that needs to be made Kerberos aware. There is no specific location that must be used, but, when using ipa-getkeytab , you should avoid using /etc/krb5.keytab . This file should not contain service-specific keytabs; each service should have its keytab saved in a specific location and the access privileges (and possibly SELinux rules) should be configured so that only this service has access to the keytab. 11.1.1. Adding Services and Keytabs from the Web UI Open the Identity tab, and select the Services subtab. Click the Add link at the top of the services list. Select the service type from the drop-down menu, and give it a name. Select the hostname of the IdM host on which the service is running. The hostname is used to construct the full service principal name. Click the Add button to save the new service principal. Use the ipa-getkeytab command to generate and assign the new keytab for the service principal. The realm name is optional. The IdM server automatically appends the Kerberos realm for which it is configured. You cannot specify a different realm. The hostname must resolve to a DNS A record for it to work with Kerberos. You can use the --force flag to force the creation of a principal should this prove necessary. The -e argument can include a comma-separated list of encryption types to include in the keytab. This supersedes any default encryption type. Warning Creating a new key resets the secret for the specified principal. This means that all other keytabs for that principal are rendered invalid. 11.1.2. Adding Services and Keytabs from the Command Line Create the service principal. The service is recognized through a name like service/FQDN : For example: Create the service keytab file using the ipa-getkeytab command. This command is run on the client in the IdM domain. (Actually, it can be run on any IdM server or client, and then the keys copied to the appropriate machine. 
However, it is simplest to run the command on the machine with the service being created.) The command requires the Kerberos service principal ( -p ), the IdM server name ( -s ), the file to write ( -k ), and the encryption method ( -e ). Be sure to copy the keytab to the appropriate directory for the service. For example: The realm name is optional. The IdM server automatically appends the Kerberos realm for which it is configured. You cannot specify a different realm. The hostname must resolve to a DNS A record for it to work with Kerberos. You can use the --force flag to force the creation of a principal should this prove necessary. The -e argument can include a comma-separated list of encryption types to include in the keytab. This supersedes any default encryption type. Warning The ipa-getkeytab command resets the secret for the specified principal. This means that all other keytabs for that principal are rendered invalid.
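As an optional, hedged follow-up check (not part of the original procedure), you can list the entries in the newly created keytab to confirm the service principal, key version numbers, and encryption types:
# -k list keytab entries, -e show encryption types, -t show timestamps
klist -ket /etc/httpd/conf/krb5.keytab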
[ "# ipa-getkeytab -s ipaserver.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts", "ipa service-add serviceName/hostname", "ipa service-add HTTP/server.example.com ------------------------------------------------------- Added service \"HTTP/[email protected]\" ------------------------------------------------------- Principal: HTTP/[email protected] Managed by: ipaserver.example.com", "ipa-getkeytab -s server.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/services
Part I. Notable Bug Fixes
Part I. Notable Bug Fixes This part describes bugs fixed in Red Hat Enterprise Linux 6.10 that have a significant impact on users.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/part-red_hat_enterprise_linux-6.10_technical_notes-notable_bug_fixes
Chapter 3. Installing a cluster on OpenStack with customizations
Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.14, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.4. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.3. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 
3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. 
Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. 
Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.2. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. 
Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.10.3. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.10.4. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. 
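As an illustration only, a RHOSP administrator might create and share a VLAN provider network and its subnet with commands like the following sketch. The network name, the "openshift" project name, the physical network label physnet1, VLAN segment 101, and the 192.0.2.0/24 range are all placeholders; use the values that match your data center:
$ openstack network create --share --external --project openshift --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 101 provider-vlan-101
$ openstack subnet create --project openshift --network provider-vlan-101 --subnet-range 192.0.2.0/24 --gateway 192.0.2.1 provider-vlan-101-subnet
The detailed conditions that such a network must meet before cluster installation are described in the next section.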
RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.10.4.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.10.4.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. 
Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.10.5. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.10.6. Optional: Configuring a cluster with dual-stack networking Important Dual-stack configuration for OpenStack is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can create a dual-stack cluster on RHOSP. However, the dual-stack configuration is enabled only if you are using an RHOSP network with IPv4 and IPv6 subnets. Note RHOSP does not support the following configurations: Conversion of an IPv4 single-stack cluster to a dual-stack cluster network. IPv6 as the primary address family for dual-stack cluster network. 3.10.6.1. Deploying the dual-stack cluster For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. Prerequisites You enabled Dynamic Host Configuration Protocol (DHCP) on the subnets. Procedure Create a network with IPv4 and IPv6 subnets. The available address modes for ipv6-ra-mode and ipv6-address-mode fields are: stateful , stateless and slaac . 
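A minimal sketch of this first step, reusing the network and subnet names, CIDRs, and the stateless address mode that appear in the sample install-config.yaml later in this section; adjust the values and address mode for your environment (DHCP is enabled on new subnets by default):
$ openstack network create dualstack
$ openstack subnet create --network dualstack --ip-version 4 --subnet-range 192.168.25.0/24 subnet-v4
$ openstack subnet create --network dualstack --ip-version 6 --subnet-range fd2e:6f44:5dd8:c956::/64 --ipv6-ra-mode stateless --ipv6-address-mode stateless subnet-v6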
Note The dual-stack network MTU must accommodate both the minimum MTU for IPv6, which is 1280 , and the OVN-Kubernetes encapsulation overhead, which is 100 . Create the API and Ingress VIPs ports. Add the IPv6 subnet to the router to enable router advertisements. If you are using a provider network, you can enable router advertisements by adding the network as an external gateway, which also enables external connectivity. For an IPv4/IPv6 dual-stack cluster where you set IPv4 as the primary endpoint for your cluster nodes, edit the install-config.yaml file like the following example: apiVersion: v1 baseDomain: mydomain.test featureSet: TechPreviewNoUpgrade 1 compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 2 - cidr: "192.168.25.0/24" - cidr: "fd2e:6f44:5dd8:c956::/64" clusterNetwork: 3 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 4 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 5 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 6 controlPlanePort: 7 fixedIPs: 8 - subnet: 9 name: subnet-v4 id: subnet-v4-id - subnet: 10 name: subnet-v6 id: subnet-v6-id network: 11 name: dualstack id: network-id 1 Dual-stack clusters are supported only with the TechPreviewNoUpgrade value. 2 3 4 You must specify an IP address range in the cidr field for both IPv4 and IPv6 address families. 5 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 6 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 7 Specify the dual-stack network details that are used by all the nodes across the cluster. 8 The Classless Inter-Domain Routing (CIDR) of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 9 10 11 You can specify a value for name , id , or both. Note The ip=dhcp,dhcp6 kernel argument, which is set on all of the nodes, results in a single Network Manager connection profile that is activated on multiple interfaces simultaneously. Because of this behavior, any additional network has the same connection enforced with an identical UUID. If you need an interface-specific configuration, create a new connection profile for that interface so that the default connection is no longer enforced on it. 3.10.7. Installation configuration for a cluster on OpenStack with a user-managed load balancer The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using an user-managed load balancer. 3.11. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. 
You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 3.12.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. 
Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. 
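In addition to the procedure that follows, you can wait for every cluster Operator to report Available as a single end-to-end check. A sketch, assuming the administrator's kubeconfig is already exported and using an illustrative timeout value:
$ oc wait clusteroperators --all --for=condition=Available=True --timeout=30m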
Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
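If the cluster is restarted after the 24-hour certificate window described in "Deploying the cluster", kubelet certificate recovery can leave node-bootstrapper certificate signing requests (CSRs) pending. A minimal sketch of reviewing and approving them manually, where <csr_name> is a placeholder taken from the list output:
$ oc get csr
$ oc adm certificate approve <csr_name>
For the full recovery flow, see the documentation for Recovering from expired control plane certificates.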
[ "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "openstack role add --user <user> --project <project> swiftoperator", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", "config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default 
project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3", "./openshift-install wait-for install-complete --log-level debug", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: mydomain.test featureSet: TechPreviewNoUpgrade 1 compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 2 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 3 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 4 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 5 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 6 controlPlanePort: 7 fixedIPs: 8 - subnet: 9 name: subnet-v4 id: subnet-v4-id - subnet: 10 name: subnet-v6 id: subnet-v6-id network: 11 name: dualstack id: network-id", "apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: 
clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_openstack/installing-openstack-installer-custom
Chapter 5. Managing user-owned OAuth access tokens
Chapter 5. Managing user-owned OAuth access tokens Users can review their own OAuth access tokens and delete any that are no longer needed. 5.1. Listing user-owned OAuth access tokens You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in. Procedure List all user-owned OAuth access tokens: USD oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full List user-owned OAuth access tokens for a particular OAuth client: USD oc get useroauthaccesstokens --field-selector=clientName="console" Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full 5.2. Viewing the details of a user-owned OAuth access token You can view the details of a user-owned OAuth access token. Procedure Describe the details of a user-owned OAuth access token: USD oc describe useroauthaccesstokens <token_name> Example output Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none> 1 The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in. 2 The client name, which describes where the token originated from. 3 The value in seconds from the creation time before this token expires. 4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used. 5 The scopes for this token. 6 The user name associated with this token. 5.3. Deleting user-owned OAuth access tokens The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that use the token. Procedure Delete the user-owned OAuth access token: USD oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted
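If you want to clean up every token that a single OAuth client issued to you, you can combine the listing and deletion steps. The following command is a minimal sketch that reuses the console client name and the field selector from the examples above; adjust the client name for your environment.

# List the tokens issued by the console client and delete them in one pass
oc get useroauthaccesstokens --field-selector=clientName="console" -o name | xargs oc delete

Because deleting a token logs you out of every session that uses it, run a command like this only when you no longer need those sessions.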
[ "oc get useroauthaccesstokens", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc get useroauthaccesstokens --field-selector=clientName=\"console\"", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc describe useroauthaccesstokens <token_name>", "Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>", "oc delete useroauthaccesstokens <token_name>", "useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/managing-oauth-access-tokens
Chapter 5. Deploying the HCI nodes
Chapter 5. Deploying the HCI nodes For DCN sites, you can deploy a hyper-converged infrastructure (HCI) stack that uses Compute and Ceph Storage on a single node. For example, the following diagram shows two DCN stacks named dcn0 and dcn1 , each in their own availability zone (AZ). Each DCN stack has its own Ceph cluster and Compute services: The procedures in Configuring the distributed compute node (DCN) environment files and Deploying HCI nodes to the distributed compute node (DCN) site describe this deployment method. These procedures demonstrate how to add a new DCN stack to your deployment and reuse the configuration from the existing heat stack to create new environment files. In the example procedures, the first heat stack deploys an overcloud within a centralized data center. Another heat stack is then created to deploy a batch of Compute nodes to a remote physical location. 5.1. Configuring the distributed compute node environment files This procedure retrieves the metadata of your central site and then generates the configuration files that the distributed compute node (DCN) sites require: Procedure Export stack information from the central stack. You must deploy the control-plane stack before running this command: Important This procedure creates a new control-plane-export.yaml environment file and uses the passwords in the plan-environment.yaml from the overcloud. The control-plane-export.yaml file contains sensitive security data. You can remove the file when you no longer require it to improve security. 5.2. Deploying HCI nodes to the distributed compute node site This procedure uses the DistributedComputeHCI role to deploy HCI nodes to an availability zone (AZ) named dcn0 . This role is used specifically for distributed compute HCI nodes. Note CephMon runs on the HCI nodes and cannot run on the central Controller node. Additionally, the central Controller node is deployed without Ceph. Procedure Review the overrides for the distributed compute node (DCN) site in dcn0/overrides.yaml : Review the proposed Ceph configuration in dcn0/ceph.yaml . Replace the values for the following parameters with values that suit your environment. For more information, see the Deploying an overcloud with containerized Red Hat Ceph and Hyperconverged Infrastructure guides. CephAnsibleExtraConfig DistributedComputeHCIParameters CephPoolDefaultPgNum CephPoolDefaultSize DistributedComputeHCIExtraConfig Create a new file called nova-az.yaml with the following contents: Provided that the overcloud can access the endpoints that are listed in the centralrc file created by the central deployment, this command creates an AZ called dcn0 , with the new HCI Compute nodes added to that AZ during deployment. Run the deploy.sh deployment script for dcn0 : When the overcloud deployment finishes, see the post-deployment configuration steps and checks in Chapter 6, Post-deployment configuration .
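After the dcn0 deployment completes, you can quickly confirm that the new availability zone and its services registered with the central overcloud before you move on to the Chapter 6 checks. This is a minimal sketch, not the official post-deployment procedure; it assumes that the centralrc credentials file from the central deployment is in your home directory and that you kept the dcn0 zone name used in this example.

source ~/centralrc
# Confirm that the dcn0 availability zone is visible to Compute and Block Storage
openstack availability zone list --compute
openstack availability zone list --volume
# Confirm that the DistributedComputeHCI nodes registered their nova-compute services
openstack compute service list --service nova-compute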
[ "openstack overcloud export --config-download-dir /var/lib/mistral/central --stack central --output-file ~/dcn-common/control-plane-export.yaml \\", "parameter_defaults: DistributedComputeHCICount: 3 DistributedComputeHCIFlavor: baremetal DistributedComputeHCISchedulerHints: 'capabilities:node': '0-ceph-%index%' CinderStorageAvailabilityZone: dcn0 NovaAZAttach: false", "parameter_defaults: CephClusterName: dcn0 NovaEnableRbdBackend: false CephAnsiblePlaybookVerbosity: 3 CephPoolDefaultPgNum: 256 CephPoolDefaultSize: 3 CephAnsibleDisksConfig: osd_scenario: lvm osd_objectstore: bluestore devices: - /dev/sda - /dev/sdb - /dev/sdc - /dev/sdd - /dev/sde - /dev/sdf - /dev/sdg - /dev/sdh - /dev/sdi - /dev/sdj - /dev/sdk - /dev/sdl ## Everything below this line is for HCI Tuning CephAnsibleExtraConfig: ceph_osd_docker_cpu_limit: 1 is_hci: true CephConfigOverrides: osd_recovery_op_priority: 3 osd_recovery_max_active: 3 osd_max_backfills: 1 ## Set relative to your hardware: # DistributedComputeHCIParameters: # NovaReservedHostMemory: 181000 # DistributedComputeHCIExtraConfig: # nova::cpu_allocation_ratio: 8.2", "resource_registry: OS::TripleO::Services::NovaAZConfig: /usr/share/openstack-tripleo-heat-templates/deployment/nova/nova-az-config.yaml parameter_defaults: NovaComputeAvailabilityZone: dcn0 RootStackName: central", "#!/bin/bash STACK=dcn0 source ~/stackrc if [[ ! -e distributed_compute_hci.yaml ]]; then openstack overcloud roles generate DistributedComputeHCI -o distributed_compute_hci.yaml fi time openstack overcloud deploy --stack USDSTACK --templates /usr/share/openstack-tripleo-heat-templates/ -r distributed_compute_hci.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-volume-active-active.yaml -e ~/dcn-common/control-plane-export.yaml -e ~/containers-env-file.yaml -e ceph.yaml -e nova-az.yaml -e overrides.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_distributed_compute_nodes_with_separate_heat_stacks/proc_deploying-the-hci-nodes
Chapter 9. Installing a three-node cluster on VMC
Chapter 9. Installing a three-node cluster on VMC In OpenShift Container Platform version 4.13, you can install a three-node cluster on your VMware vSphere instance by deploying it to VMware Cloud (VMC) on AWS . A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 9.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 If you are deploying a cluster with user-provisioned infrastructure: Configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. In a three-node cluster, the Ingress Controller pods run on the control plane nodes. For more information, see the "Load balancing requirements for user-provisioned infrastructure". After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on VMC with user-provisioned infrastructure". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 9.2. Next steps Installing a cluster on VMC with customizations Installing a cluster on VMC with user-provisioned infrastructure
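After the cluster is installed, you can verify that the control plane machines are acting as compute machines. The commands below are a sketch of such a check; node names and exact role labels vary by environment.

# Each of the three nodes should list both control plane and worker roles
oc get nodes
# The cluster-wide Scheduler resource should report that control plane machines are schedulable
oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}{"\n"}'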
[ "compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vmc/installing-vmc-three-node
Chapter 14. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure
Chapter 14. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 14.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 14.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. 
Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 14.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 14.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 14.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 14.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 14.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 14.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 14.2. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 14.4.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 14.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 14.4.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 14.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 14.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 14.5. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 14.5.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. 
Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. 
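If you choose not to use the provided CloudFormation templates for these load balancing components, you can create equivalent resources with the AWS CLI. The following commands are only a sketch of the external API load balancer and its port 6443 listener; the names, subnet ID, VPC ID, and ARNs are placeholders, and you must repeat the pattern for the internal load balancer and the port 22623 listener.

aws elbv2 create-load-balancer --name <cluster_name>-ext --type network --scheme internet-facing --subnets <public_subnet_id>
aws elbv2 create-target-group --name <cluster_name>-aext --protocol TCP --port 6443 --vpc-id <vpc_id> --target-type ip
aws elbv2 create-listener --load-balancer-arn <external_lb_arn> --protocol TCP --port 6443 --default-actions Type=forward,TargetGroupArn=<target_group_arn>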
Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. 
The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 14.5.2. Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 14.5.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 14.3. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 14.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 14.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 14.6. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 14.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 14.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 14.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 14.10. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 14.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 14.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 14.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 14.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 14.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 14.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole 14.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 14.7. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 14.7.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. 
With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 14.7.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- Add the image content resources: imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 14.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.7.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. 
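The manifest changes in the preceding steps can also be scripted. The following is a minimal sketch under two assumptions that are not part of the documented procedure: the default manifest file names are unchanged, and INSTALL_DIR stands in for <installation_directory>:

# Sketch: remove the machine and machine set manifests described in the steps above.
INSTALL_DIR="${HOME}/clusterconfig-aws"   # stands in for <installation_directory>
rm -f "${INSTALL_DIR}"/openshift/99_openshift-cluster-api_master-machines-*.yaml
rm -f "${INSTALL_DIR}"/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
rm -f "${INSTALL_DIR}"/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

# Print the scheduler setting; the output should include "mastersSchedulable: false".
# If it reads true, edit the file as described in the step above.
grep mastersSchedulable "${INSTALL_DIR}/manifests/cluster-scheduler-02-config.yml"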
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually creating long-term credentials 14.8. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 14.9. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 14.9.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 14.17. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable 14.10. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. 
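The next command lists the hosted zones that match your base domain so that you can read the zone ID from the output. If you only want the ID itself, a narrower query is also possible. The following sketch assumes that the first matching zone is the one you created for the cluster and uses the example domain from this procedure:

# Sketch: return only the hosted zone ID for the base domain. Substitute your own
# domain. The returned Id value has a /hostedzone/ prefix, so the final path
# component is the ID to record.
aws route53 list-hosted-zones-by-name \
  --dns-name mycluster.example.com \
  --query 'HostedZones[0].Id' \
  --output text | cut -d/ -f3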
You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 14.10.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 14.18. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 14.11. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
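The parameter values in this and the remaining procedures come from the outputs of the stacks that you already created. One way to read them back from the command line is sketched below; the stack name cluster-vpc is the example name used earlier in this topic, so substitute the name you chose:

# Sketch: list the outputs of the VPC stack so that VpcId and PrivateSubnetIds
# can be copied into the parameter file created in the next step.
aws cloudformation describe-stacks \
  --stack-name cluster-vpc \
  --query 'Stacks[0].Outputs' \
  --output table

The same pattern works for the other stacks, such as cluster-dns, whose outputs feed the later parameter files.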
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 14.11.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 14.19. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile 14.12. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 14.13. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 14.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-01860370941726bdd ap-east-1 ami-05bc702cdaf7e4251 ap-northeast-1 ami-098932fd93c15690d ap-northeast-2 ami-006f4e02d97910a36 ap-northeast-3 ami-0c4bd5b1724f82273 ap-south-1 ami-0cbf22b638724853d ap-south-2 ami-031f4d165f4b429c4 ap-southeast-1 ami-0dc3e381a731ab9fc ap-southeast-2 ami-032ae8d0f287a66a6 ap-southeast-3 ami-0393130e034b86423 ap-southeast-4 ami-0b38f776bded7d7d7 ca-central-1 ami-058ea81b3a1d17edd eu-central-1 ami-011010debd974a250 eu-central-2 ami-0623b105ae811a5e2 eu-north-1 ami-0c4bb9ce04f3526d4 eu-south-1 ami-06c29eccd3d74df52 eu-south-2 ami-00e0b5f3181a3f98b eu-west-1 ami-087bfa513dc600676 eu-west-2 ami-0ebad59c0e9554473 eu-west-3 ami-074e63b65eaf83f96 me-central-1 ami-0179d6ae1d908ace9 me-south-1 ami-0b60c75273d3efcd7 sa-east-1 ami-0913cbfbfa9a7a53c us-east-1 ami-0f71dcd99e6a1cd53 us-east-2 ami-0545fae7edbbbf061 us-gov-east-1 ami-081eabdc478e501e5 us-gov-west-1 ami-076102c394767f319 us-west-1 ami-0609e4436c4ae5eff us-west-2 ami-0c5d3e03c0ab9b19a Table 14.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-08dd66a61a2caa326 ap-east-1 ami-0232cd715f8168c34 ap-northeast-1 ami-0bc0b17618da96700 ap-northeast-2 ami-0ee505fb62eed2fd6 ap-northeast-3 ami-0462cd2c3b7044c77 ap-south-1 ami-0e0b4d951b43adc58 ap-south-2 ami-06d457b151cc0e407 ap-southeast-1 ami-0874e1640dfc15f17 ap-southeast-2 ami-05f215734ceb0f5ad ap-southeast-3 ami-073030df265c88b3f ap-southeast-4 ami-043f4c40a6fc3238a ca-central-1 ami-0840622f99a32f586 eu-central-1 ami-09a5e6ebe667ae6b5 eu-central-2 ami-0835cb1bf387e609a eu-north-1 ami-069ddbda521a10a27 eu-south-1 ami-09c5cc21026032b4c eu-south-2 ami-0c36ab2a8bbeed045 eu-west-1 ami-0d2723c8228cb2df3 eu-west-2 ami-0abd66103d069f9a8 eu-west-3 ami-08c7249d59239fc5c me-central-1 ami-0685f33ebb18445a2 me-south-1 ami-0466941f4e5c56fe6 sa-east-1 ami-08cdc0c8a972f4763 us-east-1 ami-0d461970173c4332d us-east-2 ami-0e9cdc0e85e0a6aeb us-gov-east-1 ami-0b896df727672ce09 us-gov-west-1 ami-0b896df727672ce09 us-west-1 ami-027b7cc5f4c74e6c1 us-west-2 ami-0b189d89b44bdfbf2 14.14. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. 
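Several values repeat across this and the remaining procedures, such as the infrastructure name, the cluster name, and the RHCOS AMI for your region. A small sketch for capturing them in shell variables follows. Note the assumptions: only the infraID field is described in this documentation, the clusterName and aws.region fields should be verified against your own metadata.json, the jq and openshift-install binaries are assumed to be on your PATH, and INSTALL_DIR stands in for <installation_directory>:

# Sketch: capture values that the remaining CloudFormation parameter files reuse.
INSTALL_DIR="${HOME}/clusterconfig-aws"   # stands in for <installation_directory>
INFRA_ID=$(jq -r .infraID "${INSTALL_DIR}/metadata.json")
CLUSTER_NAME=$(jq -r .clusterName "${INSTALL_DIR}/metadata.json")   # verify this field exists in your file
REGION=$(jq -r .aws.region "${INSTALL_DIR}/metadata.json")          # verify this field exists in your file
RHCOS_AMI=$(openshift-install coreos print-stream-json \
  | jq -r --arg r "${REGION}" '.architectures.x86_64.images.aws.regions[$r].image')
echo "infra=${INFRA_ID} cluster=${CLUSTER_NAME} region=${REGION} ami=${RHCOS_AMI}"

These variables are only a convenience for filling in the JSON parameter files; the procedure itself does not require them.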
Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 
8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. 
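Tip Instead of polling describe-stacks until the StackStatus changes, you can block until stack creation finishes. A minimal sketch, using the same <name> placeholder:

$ aws cloudformation wait stack-create-complete --stack-name <name>  # returns when the stack reaches CREATE_COMPLETE, exits non-zero if creation fails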
You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 14.14.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 14.20. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 14.15. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": 
"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 14.15.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 14.21. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: 
Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] 14.16. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. 
You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to start the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
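If you have not already collected the CertificateAuthorities and WorkerInstanceProfileName values for the parameters file, you can read them from your installation artifacts and stack outputs. The following is a minimal sketch; it assumes that the generated worker.ign file keeps its default layout and that <security_stack_name> is the name of the stack that you created for the security groups and roles:

$ # Print the base64-encoded certificate authority data URL from the worker pointer Ignition config
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/worker.ign

$ # Print the worker instance profile name from the security group and roles stack outputs
$ aws cloudformation describe-stacks --stack-name <security_stack_name> \
    --query 'Stacks[0].Outputs[?OutputKey==`WorkerInstanceProfile`].OutputValue' --output text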
Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 14.16.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 14.22. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp 14.17. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. 
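If the wait-for bootstrap-complete command times out or exits with a FATAL error, collect diagnostic data from the bootstrap node before you tear anything down. The Gathering bootstrap node diagnostic data link below covers this in detail; the following is a minimal sketch that assumes your SSH private key is loaded into ssh-agent and that you substitute the IP addresses reported by your CloudFormation stacks:

$ ./openshift-install gather bootstrap --dir <installation_directory> \
    --bootstrap <bootstrap_public_ip> \
    --master <master_1_private_ip> --master <master_2_private_ip> --master <master_3_private_ip>

The command produces a compressed log bundle that you can attach to a Red Hat support case.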
Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. 14.18. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 14.20. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. 14.20.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 14.20.2. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.20.2.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. 
Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 14.20.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 14.21. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 14.22. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. 
To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 
4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 14.23. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Register your cluster on the Cluster registration page. 14.24. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. For convenience, these lookups are combined in a short script after the next-steps list below. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 14.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 14.26. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 14.27. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . If necessary, you can remove cloud provider credentials .
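The following Bash sketch, referenced above, combines the lookups from the web-console login procedure. It is a convenience sketch under stated assumptions: the installation directory path is a placeholder that you must adjust, and the script only prints the values that the procedure tells you to obtain manually.

#!/usr/bin/env bash
# Illustrative sketch: gather the details needed for a first web-console login.
# INSTALL_DIR is a placeholder; point it at your <installation_directory>.
set -euo pipefail

INSTALL_DIR="${HOME}/install_dir"

export KUBECONFIG="${INSTALL_DIR}/auth/kubeconfig"

# Confirm that the cluster Operators are available before logging in.
oc get clusteroperators

# Password for the kubeadmin user, generated by the installation program.
echo "kubeadmin password: $(cat "${INSTALL_DIR}/auth/kubeadmin-password")"

# Hostname of the web console route in the openshift-console namespace.
echo "console URL: https://$(oc get route console -n openshift-console -o jsonpath='{.spec.host}')"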
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. 
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: 
\"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. 
Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. 
Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - 
'{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister 
Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True 
False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_aws/installing-restricted-networks-aws
Chapter 7. Configuring identity providers
Chapter 7. Configuring identity providers 7.1. Configuring an htpasswd identity provider Configure the htpasswd identity provider to allow users to log in to OpenShift Container Platform with credentials from an htpasswd file. To define an htpasswd identity provider, perform the following tasks: Create an htpasswd file to store the user and password information. Create a secret to represent the htpasswd file. Define an htpasswd identity provider resource that references the secret. Apply the resource to the default OAuth configuration to add the identity provider. 7.1.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.1.2. About htpasswd authentication Using htpasswd authentication in OpenShift Container Platform allows you to identify users based on an htpasswd file. An htpasswd file is a flat file that contains the user name and hashed password for each user. You can use the htpasswd utility to create this file. Warning Do not use htpasswd authentication in OpenShift Container Platform for production environments. Use htpasswd authentication only for development environments. 7.1.3. Creating the htpasswd file See one of the following sections for instructions about how to create the htpasswd file: Creating an htpasswd file using Linux Creating an htpasswd file using Windows 7.1.3.1. Creating an htpasswd file using Linux To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd . Prerequisites Have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by installing the httpd-tools package. Procedure Create or update your flat file with a user name and hashed password: USD htpasswd -c -B -b </path/to/users.htpasswd> <username> <password> The command generates a hashed version of the password. For example: USD htpasswd -c -B -b users.htpasswd <username> <password> Example output Adding password for user user1 Continue to add or update credentials to the file: USD htpasswd -B -b </path/to/users.htpasswd> <user_name> <password> 7.1.3.2. Creating an htpasswd file using Windows To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd . Prerequisites Have access to htpasswd.exe . This file is included in the \bin directory of many Apache httpd distributions. Procedure Create or update your flat file with a user name and hashed password: > htpasswd.exe -c -B -b <\path\to\users.htpasswd> <username> <password> The command generates a hashed version of the password. For example: > htpasswd.exe -c -B -b users.htpasswd <username> <password> Example output Adding password for user user1 Continue to add or update credentials to the file: > htpasswd.exe -b <\path\to\users.htpasswd> <username> <password> 7.1.4. Creating the htpasswd secret To use the htpasswd identity provider, you must define a secret that contains the htpasswd user file. Prerequisites Create an htpasswd file. 
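For reference, the following minimal sketch shows what the prerequisite users.htpasswd file and its base64-encoded form can look like; the user names and the placeholder bcrypt hashes are illustrative only and are not values taken from this documentation. The string printed by the second command is the value you can paste into the htpasswd field of the alternative Secret YAML shown in the procedure that follows.
$ cat users.htpasswd          # one "user:hash" line per user added with htpasswd -B
user1:$2y$05$<hashed_password>
user2:$2y$05$<hashed_password>
$ base64 -w0 users.htpasswd   # produces the single-line value for the htpasswd key in the Secret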
Procedure Create a Secret object that contains the htpasswd users file: USD oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1 1 The secret key containing the users file for the --from-file argument must be named htpasswd , as shown in the above command. Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents> 7.1.5. Sample htpasswd CR The following custom resource (CR) shows the parameters and acceptable values for an htpasswd identity provider. htpasswd CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.1.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.1.7. Updating users for an htpasswd identity provider You can add or remove users from an existing htpasswd identity provider. Prerequisites You have created a Secret object that contains the htpasswd user file. This procedure assumes that it is named htpass-secret . You have configured an htpasswd identity provider. This procedure assumes that it is named my_htpasswd_provider . You have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by installing the httpd-tools package. You have cluster administrator privileges. Procedure Retrieve the htpasswd file from the htpass-secret Secret object and save the file to your file system: USD oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd Add or remove users from the users.htpasswd file. 
To add a new user: USD htpasswd -bB users.htpasswd <username> <password> Example output Adding password for user <username> To remove an existing user: USD htpasswd -D users.htpasswd <username> Example output Deleting password for user <username> Replace the htpass-secret Secret object with the updated users in the users.htpasswd file: USD oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f - Tip You can alternatively apply the following YAML to replace the secret: apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents> If you removed one or more users, you must additionally remove existing resources for each user. Delete the User object: USD oc delete user <username> Example output user.user.openshift.io "<username>" deleted Be sure to remove the user, otherwise the user can continue using their token as long as it has not expired. Delete the Identity object for the user: USD oc delete identity my_htpasswd_provider:<username> Example output identity.user.openshift.io "my_htpasswd_provider:<username>" deleted 7.1.8. Configuring identity providers using the web console Configure your identity provider (IDP) through the web console instead of the CLI. Prerequisites You must be logged in to the web console as a cluster administrator. Procedure Navigate to Administration Cluster Settings . Under the Configuration tab, click OAuth . Under the Identity Providers section, select your identity provider from the Add drop-down menu. Note You can specify multiple IDPs through the web console without overwriting existing IDPs. 7.2. Configuring a Keystone identity provider Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. This configuration allows users to log in to OpenShift Container Platform with their Keystone credentials. 7.2.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.2.2. About Keystone authentication Keystone is an OpenStack project that provides identity, token, catalog, and policy services. You can configure the integration with Keystone so that the new OpenShift Container Platform users are based on either the Keystone user names or unique Keystone IDs. With both methods, users log in by entering their Keystone user name and password. Basing the OpenShift Container Platform users on the Keystone ID is more secure because if you delete a Keystone user and create a new Keystone user with that user name, the new user might have access to the old user's resources. 7.2.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. 
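If you do not already have the key.pem and cert.pem files that the following procedure references, they can be produced with standard openssl tooling. This is only a sketch: the file names and subject are illustrative, and it assumes that the resulting certificate signing request is signed by a certificate authority that your Keystone server trusts, with the signed result saved as cert.pem.
$ openssl genrsa -out key.pem 4096        # private key for the client certificate
$ openssl req -new -key key.pem -out client.csr -subj "/CN=openshift-keystone-client"   # CSR to submit to your CA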
Procedure Create a Secret object that contains the key and certificate by using the following command: USD oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key> 7.2.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.2.5. Sample Keystone CR The following custom resource (CR) shows the parameters and acceptable values for a Keystone identity provider. Keystone CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 Keystone domain name. In Keystone, usernames are domain-specific. Only a single domain is supported. 4 The URL to use to connect to the Keystone server (required). This must use https. 5 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. 6 Optional: Reference to an OpenShift Container Platform Secret object containing the client certificate to present when making requests to the configured URL. 7 Reference to an OpenShift Container Platform Secret object containing the key for the client certificate. Required if tlsClientCert is specified. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.2.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. 
USD oc whoami 7.3. Configuring an LDAP identity provider Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. 7.3.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.3.2. About LDAP authentication During authentication, the LDAP directory is searched for an entry that matches the provided user name. If a single unique match is found, a simple bind is attempted using the distinguished name (DN) of the entry plus the provided password. These are the steps taken: Generate a search filter by combining the attribute and filter in the configured url with the user-provided user name. Search the directory using the generated filter. If the search does not return exactly one entry, deny access. Attempt to bind to the LDAP server using the DN of the entry retrieved from the search, and the user-provided password. If the bind is unsuccessful, deny access. If the bind is successful, build an identity using the configured attributes as the identity, email address, display name, and preferred user name. The configured url is an RFC 2255 URL, which specifies the LDAP host and search parameters to use. The syntax of the URL is: For this URL: URL component Description ldap For regular LDAP, use the string ldap . For secure LDAP (LDAPS), use ldaps instead. host:port The name and port of the LDAP server. Defaults to localhost:389 for ldap and localhost:636 for LDAPS. basedn The DN of the branch of the directory where all searches should start from. At the very least, this must be the top of your directory tree, but it could also specify a subtree in the directory. attribute The attribute to search for. Although RFC 2255 allows a comma-separated list of attributes, only the first attribute will be used, no matter how many are provided. If no attributes are provided, the default is to use uid . It is recommended to choose an attribute that will be unique across all entries in the subtree you will be using. scope The scope of the search. Can be either one or sub . If the scope is not provided, the default is to use a scope of sub . filter A valid LDAP search filter. If not provided, defaults to (objectClass=*) When doing searches, the attribute, filter, and provided user name are combined to create a search filter that looks like: For example, consider a URL of: When a client attempts to connect using a user name of bob , the resulting search filter will be (&(enabled=true)(cn=bob)) . If the LDAP directory requires authentication to search, specify a bindDN and bindPassword to use to perform the entry search. 7.3.3. Creating the LDAP secret To use the identity provider, you must define an OpenShift Container Platform Secret object that contains the bindPassword field. Procedure Create a Secret object that contains the bindPassword field: USD oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1 1 The secret key containing the bindPassword for the --from-literal argument must be called bindPassword . 
Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password> 7.3.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.3.5. Sample LDAP CR The following custom resource (CR) shows the parameters and acceptable values for an LDAP identity provider. LDAP CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - uid bindDN: "" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: "ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid" 11 1 This provider name is prefixed to the returned user ID to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 List of attributes to use as the identity. First non-empty attribute is used. At least one attribute is required. If none of the listed attribute have a value, authentication fails. Defined attributes are retrieved as raw, allowing for binary values to be used. 4 List of attributes to use as the email address. First non-empty attribute is used. 5 List of attributes to use as the display name. First non-empty attribute is used. 6 List of attributes to use as the preferred user name when provisioning a user for this identity. First non-empty attribute is used. 7 Optional DN to use to bind during the search phase. Must be set if bindPassword is defined. 8 Optional reference to an OpenShift Container Platform Secret object containing the bind password. Must be set if bindDN is defined. 9 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only used when insecure is false . 10 When true , no TLS connection is made to the server. When false , ldaps:// URLs connect using TLS, and ldap:// URLs are upgraded to TLS. This must be set to false when ldaps:// URLs are in use, as these URLs always attempt to connect using TLS. 11 An RFC 2255 URL which specifies the LDAP host and search parameters to use. Note To whitelist users for an LDAP integration, use the lookup mapping method. Before a login from LDAP would be allowed, a cluster administrator must create an Identity object and a User object for each LDAP user. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.3.6. 
Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.4. Configuring a basic authentication identity provider Configure the basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic back-end integration mechanism. 7.4.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.4.2. About basic authentication Basic authentication is a generic back-end integration mechanism that allows users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Because basic authentication is generic, you can use this identity provider for advanced authentication configurations. Important Basic authentication must use an HTTPS connection to the remote server to prevent potential snooping of the user ID and password and man-in-the-middle attacks. With basic authentication configured, users send their user name and password to OpenShift Container Platform, which then validates those credentials against a remote server by making a server-to-server request, passing the credentials as a basic authentication header. This requires users to send their credentials to OpenShift Container Platform during login. Note This only works for user name/password login mechanisms, and OpenShift Container Platform must be able to make network requests to the remote authentication server. User names and passwords are validated against a remote URL that is protected by basic authentication and returns JSON. A 401 response indicates failed authentication. A non- 200 status, or the presence of a non-empty "error" key, indicates an error: {"error":"Error message"} A 200 status with a sub (subject) key indicates success: {"sub":"userid"} 1 1 The subject must be unique to the authenticated user and must not be able to be modified. A successful response can optionally provide additional data, such as: A display name using the name key. For example: {"sub":"userid", "name": "User Name", ...} An email address using the email key. For example: {"sub":"userid", "email":"[email protected]", ...} A preferred user name using the preferred_username key. This is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity. 
For example: {"sub":"014fbff9a07c", "preferred_username":"bob", ...} 7.4.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object that contains the key and certificate by using the following command: USD oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key> 7.4.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.4.5. Sample basic authentication CR The following custom resource (CR) shows the parameters and acceptable values for a basic authentication identity provider. Basic authentication CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret 1 This provider name is prefixed to the returned user ID to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 URL accepting credentials in Basic authentication headers. 4 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. 5 Optional: Reference to an OpenShift Container Platform Secret object containing the client certificate to present when making requests to the configured URL. 6 Reference to an OpenShift Container Platform Secret object containing the key for the client certificate. Required if tlsClientCert is specified. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.4.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. 
Log in to the cluster as a user from your identity provider, entering the password when prompted. $ oc login -u <username> Confirm that the user logged in successfully, and display the user name. $ oc whoami 7.4.7. Example Apache HTTPD configuration for basic identity providers The basic identity provider (IDP) configuration in OpenShift Container Platform 4 requires that the IDP server respond with JSON for success and failures. You can use CGI scripting in Apache HTTPD to accomplish this. This section provides examples. Example /etc/httpd/conf.d/login.conf Example /var/www/cgi-bin/login.cgi Example /var/www/cgi-bin/fail.cgi 7.4.7.1. File requirements These are the requirements for the files you create on an Apache HTTPD web server: login.cgi and fail.cgi must be executable ( chmod +x ). login.cgi and fail.cgi must have proper SELinux contexts if SELinux is enabled: restorecon -RFv /var/www/cgi-bin , or ensure that the context is httpd_sys_script_exec_t using ls -laZ . login.cgi is only executed if your user successfully logs in per Require and Auth directives. fail.cgi is executed if the user fails to log in, resulting in an HTTP 401 response. 7.4.8. Basic authentication troubleshooting The most common issue relates to network connectivity to the backend server. For simple debugging, run curl commands on the master. To test for a successful login, replace the <user> and <password> in the following example command with valid credentials. To test an invalid login, replace them with false credentials. $ curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp Successful responses A 200 status with a sub (subject) key indicates success: {"sub":"userid"} The subject must be unique to the authenticated user, and must not be able to be modified. A successful response can optionally provide additional data, such as: A display name using the name key: {"sub":"userid", "name": "User Name", ...} An email address using the email key: {"sub":"userid", "email":"[email protected]", ...} A preferred user name using the preferred_username key: {"sub":"014fbff9a07c", "preferred_username":"bob", ...} The preferred_username key is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity. Failed responses A 401 response indicates failed authentication. A non- 200 status or the presence of a non-empty "error" key indicates an error: {"error":"Error message"} 7.5. Configuring a request header identity provider Configure the request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. 7.5.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.5.2. About request header authentication A request header identity provider identifies users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value.
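To make this concrete, the following minimal sketch approximates the kind of request the authenticating proxy forwards to the OAuth server once it has authenticated a user. The header names match the sample CR later in this section, the user values are illustrative only, and the client certificate path follows the Apache example further below, because the OAuth server requires mutual TLS from the proxy.
$ curl -k -v -H "X-Remote-User: jsmith" \
    -H "X-Remote-User-Display-Name: Jane Smith" \
    -H "X-Csrf-Token: 1" \
    --cert /etc/pki/tls/certs/authproxy.pem \
    "https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token"
Ordinary clients cannot impersonate users by setting these headers themselves, because requests that do not present the proxy's client certificate are rejected.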
The request header identity provider cannot be combined with other identity providers that use direct password logins, such as htpasswd, Keystone, LDAP or basic authentication. Note You can also use the request header identity provider for advanced configurations such as the community-supported SAML authentication . Note that this solution is not supported by Red Hat. For users to authenticate using this identity provider, they must access https:// <namespace_route> /oauth/authorize (and subpaths) via an authenticating proxy. To accomplish this, configure the OAuth server to redirect unauthenticated requests for OAuth tokens to the proxy endpoint that proxies to https:// <namespace_route> /oauth/authorize . To redirect unauthenticated requests from clients expecting browser-based login flows: Set the provider.loginURL parameter to the authenticating proxy URL that will authenticate interactive clients and then proxy the request to https:// <namespace_route> /oauth/authorize . To redirect unauthenticated requests from clients expecting WWW-Authenticate challenges: Set the provider.challengeURL parameter to the authenticating proxy URL that will authenticate clients expecting WWW-Authenticate challenges and then proxy the request to https:// <namespace_route> /oauth/authorize . The provider.challengeURL and provider.loginURL parameters can include the following tokens in the query portion of the URL: ${url} is replaced with the current URL, escaped to be safe in a query parameter. For example: https://www.example.com/sso-login?then=${url} ${query} is replaced with the current query string, unescaped. For example: https://www.example.com/auth-proxy/oauth/authorize?${query} Important As of OpenShift Container Platform 4.1, your proxy must support mutual TLS. 7.5.2.1. SSPI connection support on Microsoft Windows Important Using SSPI connection support on Microsoft Windows is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The OpenShift CLI ( oc ) supports the Security Support Provider Interface (SSPI) to allow for SSO flows on Microsoft Windows. If you use the request header identity provider with a GSSAPI-enabled proxy to connect an Active Directory server to OpenShift Container Platform, users can automatically authenticate to OpenShift Container Platform by using the oc command line interface from a domain-joined Microsoft Windows computer. 7.5.3. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.
USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.5.4. Sample request header CR The following custom resource (CR) shows the parameters and acceptable values for a request header identity provider. Request header CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: "https://www.example.com/challenging-proxy/oauth/authorize?USD{query}" 3 loginURL: "https://www.example.com/login-proxy/oauth/authorize?USD{query}" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login 1 This provider name is prefixed to the user name in the request header to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate browser-based clients and then proxy their request to https:// <namespace_route> /oauth/authorize . The URL that proxies to https:// <namespace_route> /oauth/authorize must end with /authorize (with no trailing slash), and also proxy subpaths, in order for OAuth approval flows to work properly. USD{url} is replaced with the current URL, escaped to be safe in a query parameter. USD{query} is replaced with the current query string. If this attribute is not defined, then loginURL must be used. 4 Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate clients which expect WWW-Authenticate challenges, and then proxy them to https:// <namespace_route> /oauth/authorize . USD{url} is replaced with the current URL, escaped to be safe in a query parameter. USD{query} is replaced with the current query string. If this attribute is not defined, then challengeURL must be used. 5 Reference to an OpenShift Container Platform ConfigMap object containing a PEM-encoded certificate bundle. Used as a trust anchor to validate the TLS certificates presented by the remote server. Important As of OpenShift Container Platform 4.1, the ca field is required for this identity provider. This means that your proxy must support mutual TLS. 6 Optional: list of common names ( cn ). If set, a valid client certificate with a Common Name ( cn ) in the specified list must be presented before the request headers are checked for user names. If empty, any Common Name is allowed. Can only be used in combination with ca . 7 Header names to check, in order, for the user identity. The first header containing a value is used as the identity. Required, case-insensitive. 8 Header names to check, in order, for an email address. The first header containing a value is used as the email address. Optional, case-insensitive. 9 Header names to check, in order, for a display name. The first header containing a value is used as the display name. Optional, case-insensitive. 10 Header names to check, in order, for a preferred user name, if different than the immutable identity determined from the headers specified in headers . 
The first header containing a value is used as the preferred user name when provisioning. Optional, case-insensitive. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.5.5. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.5.6. Example Apache authentication configuration using request header This example configures an Apache authentication proxy for the OpenShift Container Platform using the request header identity provider. Custom proxy configuration Using the mod_auth_gssapi module is a popular way to configure the Apache authentication proxy using the request header identity provider; however, it is not required. Other proxies can easily be used if the following requirements are met: Block the X-Remote-User header from client requests to prevent spoofing. Enforce client certificate authentication in the RequestHeaderIdentityProvider configuration. Require the X-Csrf-Token header be set for all authentication requests using the challenge flow. Make sure only the /oauth/authorize endpoint and its subpaths are proxied; redirects must be rewritten to allow the backend server to send the client to the correct location. The URL that proxies to https://<namespace_route>/oauth/authorize must end with /authorize with no trailing slash. For example, https://proxy.example.com/login-proxy/authorize?... must proxy to https://<namespace_route>/oauth/authorize?... . Subpaths of the URL that proxies to https://<namespace_route>/oauth/authorize must proxy to subpaths of https://<namespace_route>/oauth/authorize . For example, https://proxy.example.com/login-proxy/authorize/approve?... must proxy to https://<namespace_route>/oauth/authorize/approve?... . Note The https://<namespace_route> address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication . Configuring Apache authentication using request header This example uses the mod_auth_gssapi module to configure an Apache authentication proxy using the request header identity provider. Prerequisites Obtain the mod_auth_gssapi module from the Optional channel . You must have the following packages installed on your local machine: httpd mod_ssl mod_session apr-util-openssl mod_auth_gssapi Generate a CA for validating requests that submit the trusted header. Define an OpenShift Container Platform ConfigMap object containing the CA. This is done by running: USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1 1 The CA must be stored in the ca.crt key of the ConfigMap object. 
Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> Generate a client certificate for the proxy. You can generate this certificate by using any x509 certificate tooling. The client certificate must be signed by the CA you generated for validating requests that submit the trusted header. Create the custom resource (CR) for your identity providers. Procedure This proxy uses a client certificate to connect to the OAuth server, which is configured to trust the X-Remote-User header. Create the certificate for the Apache configuration. The certificate that you specify as the SSLProxyMachineCertificateFile parameter value is the proxy's client certificate that is used to authenticate the proxy to the server. It must use TLS Web Client Authentication as the extended key type. Create the Apache configuration. Use the following template to provide your required settings and values: Important Carefully review the template and customize its contents to fit your environment. Note The https://<namespace_route> address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication . Update the identityProviders stanza in the custom resource (CR): identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: "https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}" loginURL: "https://<namespace_route>/login-proxy/oauth/authorize?USD{query}" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User Verify the configuration. Confirm that you can bypass the proxy by requesting a token by supplying the correct client certificate and header: # curl -L -k -H "X-Remote-User: joe" \ --cert /etc/pki/tls/certs/authproxy.pem \ https://<namespace_route>/oauth/token/request Confirm that requests that do not supply the client certificate fail by requesting a token without the certificate: # curl -L -k -H "X-Remote-User: joe" \ https://<namespace_route>/oauth/token/request Confirm that the challengeURL redirect is active: # curl -k -v -H 'X-Csrf-Token: 1' \ https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token Copy the challengeURL redirect to use in the step. Run this command to show a 401 response with a WWW-Authenticate basic challenge, a negotiate challenge, or both challenges: # curl -k -v -H 'X-Csrf-Token: 1' \ <challengeURL_redirect + query> Test logging in to the OpenShift CLI ( oc ) with and without using a Kerberos ticket: If you generated a Kerberos ticket by using kinit , destroy it: # kdestroy -c cache_name 1 1 Make sure to provide the name of your Kerberos cache. Log in to the oc tool by using your Kerberos credentials: # oc login -u <username> Enter your Kerberos password at the prompt. Log out of the oc tool: # oc logout Use your Kerberos credentials to get a ticket: # kinit Enter your Kerberos user name and password at the prompt. Confirm that you can log in to the oc tool: # oc login If your configuration is correct, you are logged in without entering separate credentials. 7.6. Configuring a GitHub or GitHub Enterprise identity provider Configure the github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. OAuth facilitates a token exchange flow between OpenShift Container Platform and GitHub or GitHub Enterprise. 
You can use the GitHub integration to connect to either GitHub or GitHub Enterprise. For GitHub Enterprise integrations, you must provide the hostname of your instance and can optionally provide a ca certificate bundle to use in requests to the server. Note The following steps apply to both GitHub and GitHub Enterprise unless noted. 7.6.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.6.2. About GitHub authentication Configuring GitHub authentication allows users to log in to OpenShift Container Platform with their GitHub credentials. To prevent anyone with any GitHub user ID from logging in to your OpenShift Container Platform cluster, you can restrict access to only those in specific GitHub organizations. 7.6.3. Registering a GitHub application To use GitHub or GitHub Enterprise as an identity provider, you must register an application to use. Procedure Register an application on GitHub: For GitHub, click Settings Developer settings OAuth Apps Register a new OAuth application . For GitHub Enterprise, go to your GitHub Enterprise home page and then click Settings Developer settings Register a new application . Enter an application name, for example My OpenShift Install . Enter a homepage URL, such as https://oauth-openshift.apps.<cluster-name>.<cluster-domain> . Optional: Enter an application description. Enter the authorization callback URL, where the end of the URL contains the identity provider name : For example: Click Register application . GitHub provides a client ID and a client secret. You need these values to complete the identity provider configuration. 7.6.4. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.6.5. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Note This procedure is only required for GitHub Enterprise. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.6.6. 
Sample GitHub CR The following custom resource (CR) shows the parameters and acceptable values for a GitHub identity provider. GitHub CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b 1 This provider name is prefixed to the GitHub numeric user ID to form an identity name. It is also used to build the callback URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only for use in GitHub Enterprise with a non-publicly trusted root certificate. 4 The client ID of a registered GitHub OAuth application . The application must be configured with a callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name> . 5 Reference to an OpenShift Container Platform Secret object containing the client secret issued by GitHub. 6 For GitHub Enterprise, you must provide the hostname of your instance, such as example.com . This value must match the GitHub Enterprise hostname value in in the /setup/settings file and cannot include a port number. If this value is not set, then either teams or organizations must be defined. For GitHub, omit this parameter. 7 The list of organizations. Either the organizations or teams field must be set unless the hostname field is set, or if mappingMethod is set to lookup . Cannot be used in combination with the teams field. 8 The list of teams. Either the teams or organizations field must be set unless the hostname field is set, or if mappingMethod is set to lookup . Cannot be used in combination with the organizations field. Note If organizations or teams is specified, only GitHub users that are members of at least one of the listed organizations will be allowed to log in. If the GitHub OAuth application configured in clientID is not owned by the organization, an organization owner must grant third-party access to use this option. This can be done during the first GitHub login by the organization's administrator, or from the GitHub organization settings. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.6.7. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Obtain a token from the OAuth server. As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token. You can also access this page from the web console by navigating to (?) 
Help Command Line Tools Copy Login Command . Log in to the cluster, passing in the token to authenticate. USD oc login --token=<token> Note This identity provider does not support logging in with a user name and password. Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.7. Configuring a GitLab identity provider Configure the gitlab identity provider using GitLab.com or any other GitLab instance as an identity provider. 7.7.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.7.2. About GitLab authentication Configuring GitLab authentication allows users to log in to OpenShift Container Platform with their GitLab credentials. If you use GitLab version 7.7.0 to 11.0, you connect using the OAuth integration . If you use GitLab version 11.1 or later, you can use OpenID Connect (OIDC) to connect instead of OAuth. 7.7.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.7.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Note This procedure is only required for GitHub Enterprise. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.7.5. Sample GitLab CR The following custom resource (CR) shows the parameters and acceptable values for a GitLab identity provider. GitLab CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map 1 This provider name is prefixed to the GitLab numeric user ID to form an identity name. It is also used to build the callback URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 The client ID of a registered GitLab OAuth application . 
The application must be configured with a callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name> . 4 Reference to an OpenShift Container Platform Secret object containing the client secret issued by GitLab. 5 The host URL of a GitLab provider. This could either be https://gitlab.com/ or any other self hosted instance of GitLab. 6 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.7.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.8. Configuring a Google identity provider Configure the google identity provider using the Google OpenID Connect integration . 7.8.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.8.2. About Google authentication Using Google as an identity provider allows any Google user to authenticate to your server. You can limit authentication to members of a specific hosted domain with the hostedDomain configuration attribute. Note Using Google as an identity provider requires users to get a token using <namespace_route>/oauth/token/request to use with command-line tools. 7.8.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.8.4. Sample Google CR The following custom resource (CR) shows the parameters and acceptable values for a Google identity provider. 
Google CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: "example.com" 5 1 This provider name is prefixed to the Google numeric user ID to form an identity name. It is also used to build the redirect URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 The client ID of a registered Google project . The project must be configured with a redirect URI of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name> . 4 Reference to an OpenShift Container Platform Secret object containing the client secret issued by Google. 5 A hosted domain used to restrict sign-in accounts. Optional if the lookup mappingMethod is used. If empty, any Google account is allowed to authenticate. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.8.5. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Obtain a token from the OAuth server. As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token. You can also access this page from the web console by navigating to (?) Help Command Line Tools Copy Login Command . Log in to the cluster, passing in the token to authenticate. USD oc login --token=<token> Note This identity provider does not support logging in with a user name and password. Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.9. Configuring an OpenID Connect identity provider Configure the oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . 7.9.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.9.2. About OpenID Connect authentication The Authentication Operator in OpenShift Container Platform requires that the configured OpenID Connect identity provider implements the OpenID Connect Discovery specification. Note ID Token and UserInfo decryptions are not supported. By default, the openid scope is requested. If required, extra scopes can be specified in the extraScopes field. Claims are read from the JWT id_token returned from the OpenID identity provider and, if specified, from the JSON returned by the UserInfo URL. At least one claim must be configured to use as the user's identity. The standard identity claim is sub . 
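Because the Authentication Operator relies on OpenID Connect Discovery, a quick way to confirm that your provider is ready, and to see which claims it advertises, is to fetch its discovery document. The following is only an illustrative check with a placeholder issuer URL; it is not part of the documented procedure:

# Fetch the OpenID Connect Discovery document from the issuer (placeholder URL).
# The response should list authorization_endpoint, token_endpoint, jwks_uri and,
# for many providers, claims_supported.
curl -sk https://www.idp-issuer.example.com/.well-known/openid-configuration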
You can also indicate which claims to use as the user's preferred user name, display name, and email address. If multiple claims are specified, the first one with a non-empty value is used. The following table lists the standard claims: Claim Description sub Short for "subject identifier." The remote identity for the user at the issuer. preferred_username The preferred user name when provisioning a user. A shorthand name that the user wants to be referred to as, such as janedoe . Typically a value that corresponding to the user's login or username in the authentication system, such as username or email. email Email address. name Display name. See the OpenID claims documentation for more information. Note Unless your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, users must get a token from <namespace_route>/oauth/token/request to use with command-line tools. 7.9.3. Supported OIDC providers Red Hat tests and supports specific OpenID Connect (OIDC) providers with OpenShift Container Platform. The following OpenID Connect (OIDC) providers are tested and supported with OpenShift Container Platform. Using an OIDC provider that is not on the following list might work with OpenShift Container Platform, but the provider was not tested by Red Hat and therefore is not supported by Red Hat. Active Directory Federation Services for Windows Server Note Currently, it is not supported to use Active Directory Federation Services for Windows Server with OpenShift Container Platform when custom claims are used. GitLab Google Keycloak Microsoft identity platform (Azure Active Directory v2.0) Note Currently, it is not supported to use Microsoft identity platform when group names are required to be synced. Okta Ping Identity Red Hat Single Sign-On 7.9.4. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.9.5. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Note This procedure is only required for GitHub Enterprise. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.9.6. 
Sample OpenID Connect CRs The following custom resources (CRs) show the parameters and acceptable values for an OpenID Connect identity provider. If you must specify a custom certificate bundle, extra scopes, extra authorization request parameters, or a userInfo URL, use the full OpenID Connect CR. Standard OpenID Connect CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6 1 This provider name is prefixed to the value of the identity claim to form an identity name. It is also used to build the redirect URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 The client ID of a client registered with the OpenID provider. The client must be allowed to redirect to https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> . 4 A reference to an OpenShift Container Platform Secret object containing the client secret. 5 The list of claims to use as the identity. The first non-empty claim is used. 6 The Issuer Identifier described in the OpenID spec. Must use https without query or fragment component. Full OpenID Connect CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: ... clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: "true" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com 1 Optional: Reference to an OpenShift Container Platform config map containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. 2 Optional: The list of scopes to request, in addition to the openid scope, during the authorization token request. 3 Optional: A map of extra parameters to add to the authorization token request. 4 The list of claims to use as the preferred user name when provisioning a user for this identity. The first non-empty claim is used. 5 The list of claims to use as the display name. The first non-empty claim is used. 6 The list of claims to use as the email address. The first non-empty claim is used. 7 The list of claims to use to synchronize groups from the OpenID Connect provider to OpenShift Container Platform upon user login. The first non-empty claim is used. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.9.7. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. 
Obtain a token from the OAuth server. As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token. You can also access this page from the web console by navigating to (?) Help Command Line Tools Copy Login Command . Log in to the cluster, passing in the token to authenticate. USD oc login --token=<token> Note If your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, you can log in with a user name and password. You might need to take steps to enable the ROPC grant flow for your identity provider. After the OIDC identity provider is configured in OpenShift Container Platform, you can log in by using the following command, which prompts for your user name and password: USD oc login -u <identity_provider_username> --server=<api_server_url_and_port> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.9.8. Configuring identity providers using the web console Configure your identity provider (IDP) through the web console instead of the CLI. Prerequisites You must be logged in to the web console as a cluster administrator. Procedure Navigate to Administration Cluster Settings . Under the Configuration tab, click OAuth . Under the Identity Providers section, select your identity provider from the Add drop-down menu. Note You can specify multiple IDPs through the web console without overwriting existing IDPs.
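After identity providers have been added, whether through the CLI or the web console, you can review what is currently configured in the cluster OAuth resource. This is an optional convenience check, not part of the documented procedure:

# List the name and type of each configured identity provider.
oc get oauth cluster -o jsonpath='{range .spec.identityProviders[*]}{.name}{"\t"}{.type}{"\n"}{end}'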
[ "htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>", "htpasswd -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>", "> htpasswd.exe -c -B -b <\\path\\to\\users.htpasswd> <username> <password>", "> htpasswd.exe -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "> htpasswd.exe -b <\\path\\to\\users.htpasswd> <username> <password>", "oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1", "apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd", "htpasswd -bB users.htpasswd <username> <password>", "Adding password for user <username>", "htpasswd -D users.htpasswd <username>", "Deleting password for user <username>", "oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -", "apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "oc delete user <username>", "user.user.openshift.io \"<username>\" deleted", "oc delete identity my_htpasswd_provider:<username>", "identity.user.openshift.io \"my_htpasswd_provider:<username>\" deleted", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "ldap://host:port/basedn?attribute?scope?filter", "(&(<filter>)(<attribute>=<username>))", "ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)", "oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1", "apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - 
uid bindDN: \"\" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: \"ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid\" 11", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "{\"error\":\"Error message\"}", "{\"sub\":\"userid\"} 1", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "<VirtualHost *:443> # CGI Scripts in here DocumentRoot /var/www/cgi-bin # SSL Directives SSLEngine on SSLCipherSuite PROFILE=SYSTEM SSLProxyCipherSuite PROFILE=SYSTEM SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key # Configure HTTPD to execute scripts ScriptAlias /basic /var/www/cgi-bin # Handles a failed login attempt ErrorDocument 401 /basic/fail.cgi # Handles authentication <Location /basic/login.cgi> AuthType Basic AuthName \"Please Log In\" AuthBasicProvider file AuthUserFile /etc/httpd/conf/passwords Require valid-user </Location> </VirtualHost>", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"sub\":\"userid\", \"name\":\"'USDREMOTE_USER'\"}' exit 0", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"error\": \"Login failure\"}' exit 0", "curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp", "{\"sub\":\"userid\"}", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: \"https://www.example.com/challenging-proxy/oauth/authorize?USD{query}\" 3 loginURL: \"https://www.example.com/login-proxy/oauth/authorize?USD{query}\" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1", "apiVersion: v1 kind: ConfigMap 
metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "LoadModule request_module modules/mod_request.so LoadModule auth_gssapi_module modules/mod_auth_gssapi.so Some Apache configurations might require these modules. LoadModule auth_form_module modules/mod_auth_form.so LoadModule session_module modules/mod_session.so Nothing needs to be served over HTTP. This virtual host simply redirects to HTTPS. <VirtualHost *:80> DocumentRoot /var/www/html RewriteEngine On RewriteRule ^(.*)USD https://%{HTTP_HOST}USD1 [R,L] </VirtualHost> <VirtualHost *:443> # This needs to match the certificates you generated. See the CN and X509v3 # Subject Alternative Name in the output of: # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt ServerName www.example.com DocumentRoot /var/www/html SSLEngine on SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key SSLCACertificateFile /etc/pki/CA/certs/ca.crt SSLProxyEngine on SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt # It is critical to enforce client certificates. Otherwise, requests can # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint # directly. SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem # To use the challenging-proxy, an X-Csrf-Token must be present. RewriteCond %{REQUEST_URI} ^/challenging-proxy RewriteCond %{HTTP:X-Csrf-Token} ^USD [NC] RewriteRule ^.* - [F,L] <Location /challenging-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" # For Kerberos AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On # For ldap: # AuthBasicProvider ldap # AuthLDAPURL \"ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)\" </Location> <Location /login-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. 
GssapiBasicAuth On ErrorDocument 401 /login.html </Location> </VirtualHost> RequestHeader unset X-Remote-User", "identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: \"https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}\" loginURL: \"https://<namespace_route>/login-proxy/oauth/authorize?USD{query}\" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User", "curl -L -k -H \"X-Remote-User: joe\" --cert /etc/pki/tls/certs/authproxy.pem https://<namespace_route>/oauth/token/request", "curl -L -k -H \"X-Remote-User: joe\" https://<namespace_route>/oauth/token/request", "curl -k -v -H 'X-Csrf-Token: 1' https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token", "curl -k -v -H 'X-Csrf-Token: 1' <challengeURL_redirect + query>", "kdestroy -c cache_name 1", "oc login -u <username>", "oc logout", "kinit", "oc login", "https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 
6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: \"example.com\" 5", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: \"true\" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc login -u <identity_provider_username> --server=<api_server_url_and_port>", "oc whoami" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/configuring-identity-providers
Chapter 42. Getting started with nftables
Chapter 42. Getting started with nftables The nftables framework classifies packets and is the successor to the iptables , ip6tables , arptables , ebtables , and ipset utilities. It offers numerous improvements in convenience, features, and performance over packet-filtering tools, most notably: Built-in lookup tables instead of linear processing A single framework for both the IPv4 and IPv6 protocols All rules applied atomically instead of fetching, updating, and storing a complete rule set Support for debugging and tracing in the rule set ( nftrace ) and monitoring trace events (in the nft tool) More consistent and compact syntax, no protocol-specific extensions A Netlink API for third-party applications The nftables framework uses tables to store chains. The chains contain individual rules for performing actions. The nft utility replaces all tools from the packet-filtering frameworks. You can use the libnftnl library for low-level interaction with the nftables Netlink API through the libmnl library. To display the effect of rule set changes, use the nft list ruleset command. Because these utilities add tables, chains, rules, sets, and other objects to the nftables rule set, be aware that nftables rule-set operations, such as the nft flush ruleset command, might affect rule sets installed using the iptables command. 42.1. Creating and managing nftables tables, chains, and rules You can display nftables rule sets and manage them. 42.1.1. Basics of nftables tables A table in nftables is a namespace that contains a collection of chains, rules, sets, and other objects. Each table must have an address family assigned. The address family defines the packet types that this table processes. You can set one of the following address families when you create a table: ip : Matches only IPv4 packets. This is the default if you do not specify an address family. ip6 : Matches only IPv6 packets. inet : Matches both IPv4 and IPv6 packets. arp : Matches IPv4 address resolution protocol (ARP) packets. bridge : Matches packets that pass through a bridge device. netdev : Matches packets from ingress. If you want to add a table, the format to use depends on your firewall script: in scripts in native syntax, use a table block; in shell scripts, use the nft add table command. Both forms are shown in the sketch after the chain-types overview below. 42.1.2. Basics of nftables chains Tables consist of chains which in turn are containers for rules. The following two chain types exist: Base chain : You can use base chains as an entry point for packets from the networking stack. Regular chain : You can use regular chains as a jump target to better organize rules. If you want to add a base chain to a table, the format to use depends on your firewall script: in scripts in native syntax, declare the chain inside the table block; in shell scripts, use the nft add chain command. To prevent the shell from interpreting the semicolons as the end of the command, place the \ escape character in front of them. Both forms create base chains . To create a regular chain , do not set any parameters in the curly brackets. Chain types The following are the chain types and an overview of the address families and hooks you can use them with: Type Address families Hooks Description filter all all Standard chain type nat ip , ip6 , inet prerouting , input , output , postrouting Chains of this type perform native address translation based on connection tracking entries. Only the first packet traverses this chain type. route ip , ip6 output Accepted packets that traverse this chain type cause a new route lookup if relevant parts of the IP header have changed.
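To make the two formats described above concrete, the following sketch uses the example_table and example_chain names that also appear in the script examples later in this chapter. The native syntax is shown as a comment; the shell commands are equivalent:

# In a script in native syntax (an *.nft file), a table with one base chain looks like:
#
#   table inet example_table {
#       chain example_chain {
#           type filter hook input priority 0; policy drop;
#       }
#   }
#
# The equivalent shell commands; the semicolons are escaped so that the shell
# does not interpret them as the end of the command:
nft add table inet example_table
nft add chain inet example_table example_chain { type filter hook input priority 0 \; policy drop \; }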
Chain priorities The priority parameter specifies the order in which packets traverse chains with the same hook value. You can set this parameter to an integer value or use a standard priority name. The following matrix is an overview of the standard priority names and their numeric values, and with which address families and hooks you can use them: Textual value Numeric value Address families Hooks raw -300 ip , ip6 , inet all mangle -150 ip , ip6 , inet all dstnat -100 ip , ip6 , inet prerouting -300 bridge prerouting filter 0 ip , ip6 , inet , arp , netdev all -200 bridge all security 50 ip , ip6 , inet all srcnat 100 ip , ip6 , inet postrouting 300 bridge postrouting out 100 bridge output Chain policies The chain policy defines whether nftables should accept or drop packets if rules in this chain do not specify any action. You can set one of the following policies in a chain: accept (default) drop 42.1.3. Basics of nftables rules Rules define actions to perform on packets that pass a chain that contains this rule. If the rule also contains matching expressions, nftables performs the actions only if all expressions apply. If you want to add a rule to a chain, the format to use depends on your firewall script: In scripts in native syntax, use: In shell scripts, use: This shell command appends the new rule at the end of the chain. If you prefer to add a rule at the beginning of the chain, use the nft insert command instead of nft add . 42.1.4. Managing tables, chains, and rules using nft commands To manage an nftables firewall on the command line or in shell scripts, use the nft utility. Important The commands in this procedure do not represent a typical workflow and are not optimized. This procedure only demonstrates how to use nft commands to manage tables, chains, and rules in general. Procedure Create a table named nftables_svc with the inet address family so that the table can process both IPv4 and IPv6 packets: Add a base chain named INPUT , that processes incoming network traffic, to the inet nftables_svc table: To avoid that the shell interprets the semicolons as the end of the command, escape the semicolons using the \ character. Add rules to the INPUT chain. For example, allow incoming TCP traffic on port 22 and 443, and, as the last rule of the INPUT chain, reject other incoming traffic with an Internet Control Message Protocol (ICMP) port unreachable message: If you enter the nft add rule commands as shown, nft adds the rules in the same order to the chain as you run the commands. Display the current rule set including handles: Insert a rule before the existing rule with handle 3. For example, to insert a rule that allows TCP traffic on port 636, enter: Append a rule after the existing rule with handle 3. For example, to insert a rule that allows TCP traffic on port 80, enter: Display the rule set again with handles. Verify that the later added rules have been added to the specified positions: Remove the rule with handle 6: To remove a rule, you must specify the handle. Display the rule set, and verify that the removed rule is no longer present: Remove all remaining rules from the INPUT chain: Display the rule set, and verify that the INPUT chain is empty: Delete the INPUT chain: You can also use this command to delete chains that still contain rules. Display the rule set, and verify that the INPUT chain has been deleted: Delete the nftables_svc table: You can also use this command to delete tables that still contain chains. 
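The nft commands that the preceding procedure refers to look roughly like the following sketch. The handle numbers 3 and 6 are illustrative; take the real numbers from your own nft -a list output:

# Create the table and the INPUT base chain.
nft add table inet nftables_svc
nft add chain inet nftables_svc INPUT { type filter hook input priority 0 \; policy accept \; }
# Allow TCP ports 22 and 443, then reject everything else.
nft add rule inet nftables_svc INPUT tcp dport 22 accept
nft add rule inet nftables_svc INPUT tcp dport 443 accept
nft add rule inet nftables_svc INPUT reject with icmpx type port-unreachable
# Display the rule set with handles.
nft -a list table inet nftables_svc
# Insert a rule before, and append a rule after, the rule with handle 3.
nft insert rule inet nftables_svc INPUT handle 3 tcp dport 636 accept
nft add rule inet nftables_svc INPUT handle 3 tcp dport 80 accept
# Remove the rule with handle 6, then clean up the chain and the table.
nft delete rule inet nftables_svc INPUT handle 6
nft flush chain inet nftables_svc INPUT
nft delete chain inet nftables_svc INPUT
nft delete table inet nftables_svc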
Note To delete the entire rule set, use the nft flush ruleset command instead of manually deleting all rules, chains, and tables in separate commands. Additional resources nft(8) man page on your system 42.2. Migrating from iptables to nftables If your firewall configuration still uses iptables rules, you can migrate your iptables rules to nftables . 42.2.1. When to use firewalld, nftables, or iptables The following is a brief overview in which scenario you should use one of the following utilities: firewalld : Use the firewalld utility for simple firewall use cases. The utility is easy to use and covers the typical use cases for these scenarios. nftables : Use the nftables utility to set up complex and performance-critical firewalls, such as for a whole network. iptables : The iptables utility on Red Hat Enterprise Linux uses the nf_tables kernel API instead of the legacy back end. The nf_tables API provides backward compatibility so that scripts that use iptables commands still work on Red Hat Enterprise Linux. For new firewall scripts, Red Hat recommends to use nftables . Important To prevent the different firewall-related services ( firewalld , nftables , or iptables ) from influencing each other, run only one of them on a RHEL host, and disable the other services. 42.2.2. Concepts in the nftables framework Compared to the iptables framework, nftables offers a more modern, efficient, and flexible alternative. There are several concepts and features that provide advanced capabilities and improvements over iptables . These enhancements simplify the rule management and improve performance to make nftables a modern alternative for complex and high-performance networking environments. The nftables framework contains the following components: Tables and namespaces In nftables , tables represent organizational units or namespaces that group together related firewall chains, sets, flowtables, and other objects. In nftables , tables provide a more flexible way to structure firewall rules and related components. While in iptables , the tables were more rigidly defined with specific purposes. Table families Each table in nftables is associated with a specific family ( ip , ip6 , inet , arp , bridge , or netdev ). This association determines which packets the table can process. For example, a table in the ip family handles only IPv4 packets. On the other hand, inet is a special case of table family. It offers a unified approach across protocols, because it can process both IPv4 and IPv6 packets. Another case of a special table family is netdev , because it is used for rules that apply directly to network devices, enabling filtering at the device level. Base chains Base chains in nftables are highly configurable entry-points in the packet processing pipeline that enable users to specify the following: Type of chain, for example "filter" The hook point in the packet processing path, for example "input", "output", "forward" Priority of the chain This flexibility enables precise control over when and how the rules are applied to packets as they pass through the network stack. A special case of a chain is the route chain, which is used to influence the routing decisions made by the kernel, based on packet headers. Virtual machine for rule processing The nftables framework uses an internal virtual machine to process rules. This virtual machine executes instructions that are similar to assembly language operations (loading data into registers, performing comparisons, and so on). 
Such a mechanism allows for highly flexible and efficient rule processing. Enhancements in nftables can be introduced as new instructions for that virtual machine. This typically requires a new kernel module and updates to the libnftnl library and the nft command-line utility. Alternatively, you can introduce new features by combining existing instructions in innovative ways without a need for kernel modifications. The syntax of nftables rules reflects the flexibility of the underlying virtual machine. For example, the rule meta mark set tcp dport map { 22: 1, 80: 2 } sets a packet's firewall mark to 1 if the TCP destination port is 22, and to 2 if the port is 80. This demonstrates how complex logic can be expressed concisely. Lessons learned and enhancements The nftables framework integrates and extends the functionality of the ipset utility, which is used in iptables for bulk matching on IP addresses, ports, other data types and, most importantly, combinations thereof. This integration makes it easier to manage large and dynamic sets of data directly within nftables . Additionally, nftables natively supports matching packets based on multiple values or ranges for any data type, which enhances its capability to handle complex filtering requirements. With nftables you can manipulate any field within a packet. In nftables , sets can be either named or anonymous. The named sets can be referenced by multiple rules and modified dynamically. The anonymous sets are defined inline within a rule and are immutable. Sets can contain elements that are combinations of different types, for example IP address and port number pairs. This feature provides greater flexibility in matching complex criteria. To manage sets, the kernel can select the most appropriate backend based on the specific requirements (performance, memory efficiency, and others). Sets can also function as maps with key-value pairs. The value part can be used as data points (values to write into packet headers), or as verdicts or chains to jump to. This enables complex and dynamic rule behaviors, known as "verdict maps". Flexible rule format The structure of nftables rules is straightforward. The conditions and actions are applied sequentially from left to right. This intuitive format simplifies rule creation and troubleshooting. Conditions in a rule are logically connected with the AND operator, which means that all conditions must be evaluated as "true" for the rule to match. If any condition fails, the evaluation moves to the next rule. Actions in nftables can be final, such as drop or accept , which stop further rule processing for the packet. Non-terminal actions, such as counter log meta mark set 0x3 , perform specific tasks (counting packets, logging, setting a mark, and others), but allow subsequent rules to be evaluated. Additional resources nft(8) man page What comes after iptables? Its successor, of course: nftables Firewalld: The Future is nftables 42.2.3. Concepts in the deprecated iptables framework Similar to the actively-maintained nftables framework, the deprecated iptables framework enables you to perform a variety of packet filtering tasks, logging and auditing, NAT-related configuration tasks, and more.
The iptables framework is structured into multiple tables, where each table is designed for a specific purpose: filter The default table, ensures general packet filtering nat For Network Address Translation (NAT), includes altering the source and destination addresses of packets mangle For specific packet alteration, enables you to do modification of packet headers for advanced routing decisions raw For configurations that need to happen before connection tracking These tables are implemented as separate kernel modules, where each table offers a fixed set of builtin chains such as INPUT , OUTPUT , and FORWARD . A chain is a sequence of rules that packets are evaluated against. These chains hook into specific points in the packet processing flow in the kernel. The chains have the same names across different tables, however their order of execution is determined by their respective hook priorities. The priorities are managed internally by the kernel to make sure that the rules are applied in the correct sequence. Originally, iptables was designed to process IPv4 traffic. However, with the inception of the IPv6 protocol, the ip6tables utility needed to be introduced to provide comparable functionality (as iptables ) and enable users to create and manage firewall rules for IPv6 packets. With the same logic, the arptables utility was created to process Address Resolution Protocol (ARP) and the ebtables utility was developed to handle Ethernet bridging frames. These tools ensure that you can apply the packet filtering abilities of iptables across various network protocols and provide comprehensive network coverage. To enhance the functionality of iptables , the extensions started to be developed. The functionality extensions are typically implemented as kernel modules that are paired with user-space dynamic shared objects (DSOs). The extensions introduce "matches" and "targets" that you can use in firewall rules to perform more sophisticated operations. Extensions can enable complex matches and targets. For instance you can match on, or manipulate specific layer 4 protocol header values, perform rate-limiting, enforce quotas, and so on. Some extensions are designed to address limitations in the default iptables syntax, for example the "multiport" match extension. This extension allows a single rule to match multiple, non-consecutive ports to simplify rule definitions, and thereby reducing the number of individual rules required. An ipset is a special kind of functionality extension to iptables . It is a kernel-level data structure that is used together with iptables to create collections of IP addresses, port numbers, and other network-related elements that you can match against packets. These sets significantly streamline, optimize, and accelerate the process of writing and managing firewall rules. Additional resources iptables(8) man page 42.2.4. Converting iptables and ip6tables rule sets to nftables Use the iptables-restore-translate and ip6tables-restore-translate utilities to translate iptables and ip6tables rule sets to nftables . Prerequisites The nftables and iptables packages are installed. The system has iptables and ip6tables rules configured. Procedure Write the iptables and ip6tables rules to a file: Convert the dump files to nftables instructions: Review and, if needed, manually update the generated nftables rules. 
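The dump and translation steps above correspond to commands similar to this sketch; the file paths are placeholders that you can change to suit your environment:

# Dump the current IPv4 and IPv6 rule sets to files.
iptables-save > /root/iptables.dump
ip6tables-save > /root/ip6tables.dump
# Translate the dumps into nftables syntax.
iptables-restore-translate -f /root/iptables.dump > /etc/nftables/ruleset-migrated-from-iptables.nft
ip6tables-restore-translate -f /root/ip6tables.dump > /etc/nftables/ruleset-migrated-from-ip6tables.nft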
To enable the nftables service to load the generated files, add the following to the /etc/sysconfig/nftables.conf file: Stop and disable the iptables service: If you used a custom script to load the iptables rules, ensure that the script no longer starts automatically and reboot to flush all tables. Enable and start the nftables service: Verification Display the nftables rule set: Additional resources Automatically loading nftables rules when the system boots 42.2.5. Converting single iptables and ip6tables rules to nftables Red Hat Enterprise Linux provides the iptables-translate and ip6tables-translate utilities to convert an iptables or ip6tables rule into the equivalent one for nftables . Prerequisites The nftables package is installed. Procedure Use the iptables-translate or ip6tables-translate utility instead of iptables or ip6tables to display the corresponding nftables rule, for example: Note that some extensions lack translation support. In these cases, the utility prints the untranslated rule prefixed with the # sign, for example: Additional resources iptables-translate --help 42.2.6. Comparison of common iptables and nftables commands The following is a comparison of common iptables and nftables commands: Listing all rules: iptables nftables iptables-save nft list ruleset Listing a certain table and chain: iptables nftables iptables -L nft list table ip filter iptables -L INPUT nft list chain ip filter INPUT iptables -t nat -L PREROUTING nft list chain ip nat PREROUTING The nft command does not pre-create tables and chains. They exist only if a user created them manually. Listing rules generated by firewalld: 42.3. Writing and executing nftables scripts The major benefit of using the nftables framework is that the execution of scripts is atomic. This means that the system either applies the whole script or prevents the execution if an error occurs. This guarantees that the firewall is always in a consistent state. Additionally, with the nftables script environment, you can: Add comments Define variables Include other rule-set files When you install the nftables package, Red Hat Enterprise Linux automatically creates *.nft scripts in the /etc/nftables/ directory. These scripts contain commands that create tables and empty chains for different purposes. 42.3.1. Supported nftables script formats You can write scripts in the nftables scripting environment in the following formats: The same format as the nft list ruleset command displays the rule set: #!/usr/sbin/nft -f # Flush the rule set flush ruleset table inet example_table { chain example_chain { # Chain for incoming packets that drops all packets that # are not explicitly allowed by any rule in this chain type filter hook input priority 0; policy drop; # Accept connections to port 22 (ssh) tcp dport ssh accept } } The same syntax as for nft commands: #!/usr/sbin/nft -f # Flush the rule set flush ruleset # Create a table add table inet example_table # Create a chain for incoming packets that drops all packets # that are not explicitly allowed by any rule in this chain add chain inet example_table example_chain { type filter hook input priority 0 ; policy drop ; } # Add a rule that accepts connections to port 22 (ssh) add rule inet example_table example_chain tcp dport ssh accept 42.3.2. Running nftables scripts You can run an nftables script either by passing it to the nft utility or by executing the script directly. 
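Both methods are sketched below before the formal procedure; the script name is an example:

# Method 1: pass the script to the nft utility.
nft -f /etc/nftables/example_firewall.nft
# Method 2: run the script directly. The script must start with the
# '#!/usr/sbin/nft -f' shebang and be executable, for example:
chown root /etc/nftables/example_firewall.nft
chmod u+x /etc/nftables/example_firewall.nft
/etc/nftables/example_firewall.nft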
Procedure To run an nftables script by passing it to the nft utility, enter: To run an nftables script directly: The first time only, perform the following setup steps: Ensure that the script starts with the following shebang sequence: #!/usr/sbin/nft -f Important If you omit the -f parameter, the nft utility does not read the script and displays: Error: syntax error, unexpected newline, expecting string . Optional: Set the owner of the script to root : Make the script executable for the owner: Run the script: If no output is displayed, the system executed the script successfully. Important Even if nft executes the script successfully, incorrectly placed rules, missing parameters, or other problems in the script can cause the firewall to behave unexpectedly. Additional resources chown(1) and chmod(1) man pages on your system Automatically loading nftables rules when the system boots 42.3.3. Using comments in nftables scripts The nftables scripting environment interprets everything to the right of a # character to the end of a line as a comment. Comments can start at the beginning of a line or follow a command on the same line: ... # Flush the rule set flush ruleset add table inet example_table # Create a table ... 42.3.4. Using variables in nftables scripts To define a variable in an nftables script, use the define keyword. You can store single values and anonymous sets in a variable. For more complex scenarios, use sets or verdict maps. Variables with a single value The following example defines a variable named INET_DEV with the value enp1s0 : You can use the variable in the script by entering the $ sign followed by the variable name: Variables that contain an anonymous set The following example defines a variable that contains an anonymous set: You can use the variable in the script by writing the $ sign followed by the variable name: Note Curly braces have special semantics when you use them in a rule because they indicate that the variable represents a set. Additional resources Using sets in nftables commands Using verdict maps in nftables commands 42.3.5. Including files in nftables scripts In the nftables scripting environment, you can include other scripts by using the include statement. If you specify only a file name without an absolute or relative path, nftables includes files from the default search path, which is set to /etc on Red Hat Enterprise Linux. Example 42.1. Including files from the default search directory To include a file from the default search directory: Example 42.2. Including all *.nft files from a directory To include all files ending with *.nft that are stored in the /etc/nftables/rulesets/ directory: Note that the include statement does not match files beginning with a dot. Additional resources The Include files section in the nft(8) man page on your system 42.3.6. Automatically loading nftables rules when the system boots The nftables systemd service loads firewall scripts that are included in the /etc/sysconfig/nftables.conf file. Prerequisites The nftables scripts are stored in the /etc/nftables/ directory. Procedure Edit the /etc/sysconfig/nftables.conf file. If you modified the *.nft scripts that were created in /etc/nftables/ with the installation of the nftables package, uncomment the include statement for these scripts. If you wrote new scripts, add include statements to include these scripts. 
For example, to load the /etc/nftables/ example .nft script when the nftables service starts, add: Optional: Start the nftables service to load the firewall rules without rebooting the system: Enable the nftables service. Additional resources Supported nftables script formats 42.4. Configuring NAT using nftables With nftables , you can configure the following network address translation (NAT) types: Masquerading Source NAT (SNAT) Destination NAT (DNAT) Redirect Important You can only use real interface names in iifname and oifname parameters, and alternative names ( altname ) are not supported. 42.4.1. NAT types These are the different network address translation (NAT) types: Masquerading and source NAT (SNAT) Use one of these NAT types to change the source IP address of packets. For example, Internet Service Providers (ISPs) do not route private IP ranges, such as 10.0.0.0/8 . If you use private IP ranges in your network and users should be able to reach servers on the internet, map the source IP address of packets from these ranges to a public IP address. Masquerading and SNAT are very similar to one another. The differences are: Masquerading automatically uses the IP address of the outgoing interface. Therefore, use masquerading if the outgoing interface uses a dynamic IP address. SNAT sets the source IP address of packets to a specified IP and does not dynamically look up the IP of the outgoing interface. Therefore, SNAT is faster than masquerading. Use SNAT if the outgoing interface uses a fixed IP address. Destination NAT (DNAT) Use this NAT type to rewrite the destination address and port of incoming packets. For example, if your web server uses an IP address from a private IP range and is, therefore, not directly accessible from the internet, you can set a DNAT rule on the router to redirect incoming traffic to this server. Redirect This type is a special case of DNAT that redirects packets to the local machine depending on the chain hook. For example, if a service runs on a different port than its standard port, you can redirect incoming traffic from the standard port to this specific port. 42.4.2. Configuring masquerading using nftables Masquerading enables a router to dynamically change the source IP of packets sent through an interface to the IP address of the interface. This means that if the interface gets a new IP assigned, nftables automatically uses the new IP when replacing the source IP. Replace the source IP of packets leaving the host through the ens3 interface to the IP set on ens3 . Procedure Create a table: Add the prerouting and postrouting chains to the table: Important Even if you do not add a rule to the prerouting chain, the nftables framework requires this chain to match incoming packet replies. Note that you must pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the postrouting chain that matches outgoing packets on the ens3 interface: 42.4.3. Configuring source NAT using nftables On a router, Source NAT (SNAT) enables you to change the IP of packets sent through an interface to a specific IP address. The router then replaces the source IP of outgoing packets. Procedure Create a table: Add the prerouting and postrouting chains to the table: Important Even if you do not add a rule to the postrouting chain, the nftables framework requires this chain to match outgoing packet replies. 
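As a sketch, the table and chain creation for this procedure might look like the following commands; the meaning of the negative priority and the -- option is explained next.

nft add table nat
nft -- add chain nat prerouting { type nat hook prerouting priority -100 \; }
nft add chain nat postrouting { type nat hook postrouting priority 100 \; }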
Note that you must pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the postrouting chain that replaces the source IP of outgoing packets through ens3 with 192.0.2.1 : Additional resources Forwarding incoming packets on a specific local port to a different host 42.4.4. Configuring destination NAT using nftables Destination NAT (DNAT) enables you to redirect traffic on a router to a host that is not directly accessible from the internet. For example, with DNAT the router redirects incoming traffic sent to ports 80 and 443 to a web server with the IP address 192.0.2.1 . Procedure Create a table: Add the prerouting and postrouting chains to the table: Important Even if you do not add a rule to the postrouting chain, the nftables framework requires this chain to match outgoing packet replies. Note that you must pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming traffic to ports 80 and 443 on the ens3 interface of the router to the web server with the IP address 192.0.2.1 : Depending on your environment, add either a SNAT or masquerading rule to change the source address for packets returning from the web server to the sender: If the ens3 interface uses a dynamic IP address, add a masquerading rule: If the ens3 interface uses a static IP address, add a SNAT rule. For example, if ens3 uses the 198.51.100.1 IP address: Enable packet forwarding: Additional resources NAT types 42.4.5. Configuring a redirect using nftables The redirect feature is a special case of destination network address translation (DNAT) that redirects packets to the local machine depending on the chain hook. For example, you can redirect incoming and forwarded traffic sent to port 22 of the local host to port 2222 . Procedure Create a table: Add the prerouting chain to the table: Note that you must pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming traffic on port 22 to port 2222 : Additional resources NAT types 42.4.6. Configuring flowtable by using nftables The nftables utility uses the netfilter framework to provide network address translation (NAT) for network traffic, and it provides the flowtable mechanism, a fastpath feature, to accelerate packet forwarding. The flowtable mechanism has the following features: Uses connection tracking to bypass the classic packet forwarding path. Avoids revisiting the routing table by bypassing the classic packet processing. Works only with TCP and UDP protocols. Is a hardware-independent software fast path. Procedure Add an example-table table of the inet family: Add an example-flowtable flowtable with the ingress hook and filter priority: Add an example-forwardchain chain to the table: This command adds a chain of filter type with the forward hook and filter priority. Add a rule with the established connection tracking state to offload traffic to the example-flowtable flowtable: Verification Verify the properties of example-table : Additional resources nft(8) man page on your system 42.5. Using sets in nftables commands The nftables framework natively supports sets. 
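As a first taste of the syntax, a single rule can cover several ports by using an anonymous set; this example is developed in the sections below.

nft add rule inet example_table example_chain tcp dport { 22, 80, 443 } accept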
You can use sets, for example, if a rule should match multiple IP addresses, port numbers, interfaces, or any other match criteria. 42.5.1. Using anonymous sets in nftables An anonymous set contains comma-separated values enclosed in curly brackets, such as { 22, 80, 443 } , that you use directly in a rule. You can use anonymous sets also for IP addresses and any other match criteria. The drawback of anonymous sets is that if you want to change the set, you must replace the rule. For a dynamic solution, use named sets as described in Using named sets in nftables . Prerequisites The example_chain chain and the example_table table in the inet family exists. Procedure For example, to add a rule to example_chain in example_table that allows incoming traffic to port 22 , 80 , and 443 : Optional: Display all chains and their rules in example_table : 42.5.2. Using named sets in nftables The nftables framework supports mutable named sets. A named set is a list or range of elements that you can use in multiple rules within a table. Another benefit over anonymous sets is that you can update a named set without replacing the rules that use the set. When you create a named set, you must specify the type of elements the set contains. You can set the following types: ipv4_addr for a set that contains IPv4 addresses or ranges, such as 192.0.2.1 or 192.0.2.0/24 . ipv6_addr for a set that contains IPv6 addresses or ranges, such as 2001:db8:1::1 or 2001:db8:1::1/64 . ether_addr for a set that contains a list of media access control (MAC) addresses, such as 52:54:00:6b:66:42 . inet_proto for a set that contains a list of internet protocol types, such as tcp . inet_service for a set that contains a list of internet services, such as ssh . mark for a set that contains a list of packet marks. Packet marks can be any positive 32-bit integer value ( 0 to 2147483647 ). Prerequisites The example_chain chain and the example_table table exists. Procedure Create an empty set. The following examples create a set for IPv4 addresses: To create a set that can store multiple individual IPv4 addresses: To create a set that can store IPv4 address ranges: Important To prevent the shell from interpreting the semicolons as the end of the command, you must escape the semicolons with a backslash. Optional: Create rules that use the set. For example, the following command adds a rule to the example_chain in the example_table that will drop all packets from IPv4 addresses in example_set . Because example_set is still empty, the rule has currently no effect. Add IPv4 addresses to example_set : If you create a set that stores individual IPv4 addresses, enter: If you create a set that stores IPv4 ranges, enter: When you specify an IP address range, you can alternatively use the Classless Inter-Domain Routing (CIDR) notation, such as 192.0.2.0/24 in the above example. 42.5.3. Additional resources The Sets section in the nft(8) man page on your system 42.6. Using verdict maps in nftables commands Verdict maps, which are also known as dictionaries, enable nft to perform an action based on packet information by mapping match criteria to an action. 42.6.1. Using anonymous maps in nftables An anonymous map is a { match_criteria : action } statement that you use directly in a rule. The statement can contain multiple comma-separated mappings. The drawback of an anonymous map is that if you want to change the map, you must replace the rule. For a dynamic solution, use named maps as described in Using named maps in nftables . 
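In a rule, an anonymous map looks roughly like the following; the counter chains referenced here are created in the procedure below.

nft add rule inet example_table incoming_traffic ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets }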
For example, you can use an anonymous map to route both TCP and UDP packets of the IPv4 and IPv6 protocol to different chains to count incoming TCP and UDP packets separately. Procedure Create a new table: Create the tcp_packets chain in example_table : Add a rule to tcp_packets that counts the traffic in this chain: Create the udp_packets chain in example_table Add a rule to udp_packets that counts the traffic in this chain: Create a chain for incoming traffic. For example, to create a chain named incoming_traffic in example_table that filters incoming traffic: Add a rule with an anonymous map to incoming_traffic : The anonymous map distinguishes the packets and sends them to the different counter chains based on their protocol. To list the traffic counters, display example_table : The counters in the tcp_packets and udp_packets chain display both the number of received packets and bytes. 42.6.2. Using named maps in nftables The nftables framework supports named maps. You can use these maps in multiple rules within a table. Another benefit over anonymous maps is that you can update a named map without replacing the rules that use it. When you create a named map, you must specify the type of elements: ipv4_addr for a map whose match part contains an IPv4 address, such as 192.0.2.1 . ipv6_addr for a map whose match part contains an IPv6 address, such as 2001:db8:1::1 . ether_addr for a map whose match part contains a media access control (MAC) address, such as 52:54:00:6b:66:42 . inet_proto for a map whose match part contains an internet protocol type, such as tcp . inet_service for a map whose match part contains an internet services name port number, such as ssh or 22 . mark for a map whose match part contains a packet mark. A packet mark can be any positive 32-bit integer value ( 0 to 2147483647 ). counter for a map whose match part contains a counter value. The counter value can be any positive 64-bit integer value. quota for a map whose match part contains a quota value. The quota value can be any positive 64-bit integer value. For example, you can allow or drop incoming packets based on their source IP address. Using a named map, you require only a single rule to configure this scenario while the IP addresses and actions are dynamically stored in the map. Procedure Create a table. For example, to create a table named example_table that processes IPv4 packets: Create a chain. For example, to create a chain named example_chain in example_table : Important To prevent the shell from interpreting the semicolons as the end of the command, you must escape the semicolons with a backslash. Create an empty map. For example, to create a map for IPv4 addresses: Create rules that use the map. For example, the following command adds a rule to example_chain in example_table that applies actions to IPv4 addresses which are both defined in example_map : Add IPv4 addresses and corresponding actions to example_map : This example defines the mappings of IPv4 addresses to actions. In combination with the rule created above, the firewall accepts packet from 192.0.2.1 and drops packets from 192.0.2.2 . Optional: Enhance the map by adding another IP address and action statement: Optional: Remove an entry from the map: Optional: Display the rule set: 42.6.3. Additional resources The Maps section in the nft(8) man page on your system 42.7. 
Example: Protecting a LAN and DMZ using an nftables script Use the nftables framework on a RHEL router to write and install a firewall script that protects the network clients in an internal LAN and a web server in a DMZ from unauthorized access from the internet and from other networks. Important This example is only for demonstration purposes and describes a scenario with specific requirements. Firewall scripts highly depend on the network infrastructure and security requirements. Use this example to learn the concepts of nftables firewalls when you write scripts for your own environment. 42.7.1. Network conditions The network in this example has the following conditions: The router is connected to the following networks: The internet through interface enp1s0 The internal LAN through interface enp7s0 The DMZ through enp8s0 The internet interface of the router has both a static IPv4 address ( 203.0.113.1 ) and IPv6 address ( 2001:db8:a::1 ) assigned. The clients in the internal LAN use only private IPv4 addresses from the range 10.0.0.0/24 . Consequently, traffic from the LAN to the internet requires source network address translation (SNAT). The administrator PCs in the internal LAN use the IP addresses 10.0.0.100 and 10.0.0.200 . The DMZ uses public IP addresses from the ranges 198.51.100.0/24 and 2001:db8:b::/56 . The web server in the DMZ uses the IP addresses 198.51.100.5 and 2001:db8:b::5 . The router acts as a caching DNS server for hosts in the LAN and DMZ. 42.7.2. Security requirements to the firewall script The following are the requirements to the nftables firewall in the example network: The router must be able to: Recursively resolve DNS queries. Perform all connections on the loopback interface. Clients in the internal LAN must be able to: Query the caching DNS server running on the router. Access the HTTPS server in the DMZ. Access any HTTPS server on the internet. The PCs of the administrators must be able to access the router and every server in the DMZ using SSH. The web server in the DMZ must be able to: Query the caching DNS server running on the router. Access HTTPS servers on the internet to download updates. Hosts on the internet must be able to: Access the HTTPS servers in the DMZ. Additionally, the following security requirements exists: Connection attempts that are not explicitly allowed should be dropped. Dropped packets should be logged. 42.7.3. Configuring logging of dropped packets to a file By default, systemd logs kernel messages, such as for dropped packets, to the journal. Additionally, you can configure the rsyslog service to log such entries to a separate file. To ensure that the log file does not grow infinitely, configure a rotation policy. Prerequisites The rsyslog package is installed. The rsyslog service is running. Procedure Create the /etc/rsyslog.d/nftables.conf file with the following content: :msg, startswith, "nft drop" -/var/log/nftables.log & stop Using this configuration, the rsyslog service logs dropped packets to the /var/log/nftables.log file instead of /var/log/messages . Restart the rsyslog service: Create the /etc/logrotate.d/nftables file with the following content to rotate /var/log/nftables.log if the size exceeds 10 MB: /var/log/nftables.log { size +10M maxage 30 sharedscripts postrotate /usr/bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true endscript } The maxage 30 setting defines that logrotate removes rotated logs older than 30 days during the rotation operation. 
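After creating both files, apply the rsyslog change and, optionally, dry-run the rotation policy to check its syntax; the -d option only prints what logrotate would do.

systemctl restart rsyslog
# Optional sanity check of the rotation policy (no files are rotated)
logrotate -d /etc/logrotate.d/nftables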
Additional resources rsyslog.conf(5) and logrotate(8) man pages on your system 42.7.4. Writing and activating the nftables script This example is an nftables firewall script that runs on a RHEL router and protects the clients in an internal LAN and a web server in a DMZ. For details about the network and the requirements for the firewall used in the example, see Network conditions and Security requirements to the firewall script . Warning This nftables firewall script is only for demonstration purposes. Do not use it without adapting it to your environments and security requirements. Prerequisites The network is configured as described in Network conditions . Procedure Create the /etc/nftables/firewall.nft script with the following content: # Remove all rules flush ruleset # Table for both IPv4 and IPv6 rules table inet nftables_svc { # Define variables for the interface name define INET_DEV = enp1s0 define LAN_DEV = enp7s0 define DMZ_DEV = enp8s0 # Set with the IPv4 addresses of admin PCs set admin_pc_ipv4 { type ipv4_addr elements = { 10.0.0.100, 10.0.0.200 } } # Chain for incoming trafic. Default policy: drop chain INPUT { type filter hook input priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept incoming traffic on loopback interface iifname lo accept # Allow request from LAN and DMZ to local DNS server iifname { USDLAN_DEV, USDDMZ_DEV } meta l4proto { tcp, udp } th dport 53 accept # Allow admins PCs to access the router using SSH iifname USDLAN_DEV ip saddr @admin_pc_ipv4 tcp dport 22 accept # Last action: Log blocked packets # (packets that were not accepted in rules in this chain) log prefix "nft drop IN : " } # Chain for outgoing traffic. Default policy: drop chain OUTPUT { type filter hook output priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept outgoing traffic on loopback interface oifname lo accept # Allow local DNS server to recursively resolve queries oifname USDINET_DEV meta l4proto { tcp, udp } th dport 53 accept # Last action: Log blocked packets log prefix "nft drop OUT: " } # Chain for forwarding traffic. Default policy: drop chain FORWARD { type filter hook forward priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # IPv4 access from LAN and internet to the HTTPS server in the DMZ iifname { USDLAN_DEV, USDINET_DEV } oifname USDDMZ_DEV ip daddr 198.51.100.5 tcp dport 443 accept # IPv6 access from internet to the HTTPS server in the DMZ iifname USDINET_DEV oifname USDDMZ_DEV ip6 daddr 2001:db8:b::5 tcp dport 443 accept # Access from LAN and DMZ to HTTPS servers on the internet iifname { USDLAN_DEV, USDDMZ_DEV } oifname USDINET_DEV tcp dport 443 accept # Last action: Log blocked packets log prefix "nft drop FWD: " } # Postrouting chain to handle SNAT chain postrouting { type nat hook postrouting priority srcnat; policy accept; # SNAT for IPv4 traffic from LAN to internet iifname USDLAN_DEV oifname USDINET_DEV snat ip to 203.0.113.1 } } Include the /etc/nftables/firewall.nft script in the /etc/sysconfig/nftables.conf file: Enable IPv4 forwarding: Enable and start the nftables service: Verification Optional: Verify the nftables rule set: Try to perform an access that the firewall prevents. 
For example, try to access the router using SSH from the DMZ: Depending on your logging settings, search: The systemd journal for the blocked packets: The /var/log/nftables.log file for the blocked packets: 42.8. Configuring port forwarding using nftables Port forwarding enables administrators to forward packets sent to a specific destination port to a different local or remote port. For example, if your web server does not have a public IP address, you can set a port forwarding rule on your firewall that forwards incoming packets on port 80 and 443 on the firewall to the web server. With this firewall rule, users on the internet can access the web server using the IP or host name of the firewall. 42.8.1. Forwarding incoming packets to a different local port You can use nftables to forward packets. For example, you can forward incoming IPv4 packets on port 8022 to port 22 on the local system. Procedure Create a table named nat with the ip address family: Add the prerouting and postrouting chains to the table: Note Pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming packets on port 8022 to the local port 22 : 42.8.2. Forwarding incoming packets on a specific local port to a different host You can use a destination network address translation (DNAT) rule to forward incoming packets on a local port to a remote host. This enables users on the internet to access a service that runs on a host with a private IP address. For example, you can forward incoming IPv4 packets on the local port 443 to the same port number on the remote system with the 192.0.2.1 IP address. Prerequisites You are logged in as the root user on the system that should forward the packets. Procedure Create a table named nat with the ip address family: Add the prerouting and postrouting chains to the table: Note Pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming packets on port 443 to the same port on 192.0.2.1 : Add a rule to the postrouting chain to masquerade outgoing traffic: Enable packet forwarding: 42.9. Using nftables to limit the amount of connections You can use nftables to limit the number of connections or to block IP addresses that attempt to establish a given amount of connections to prevent them from using too many system resources. 42.9.1. Limiting the number of connections by using nftables By using the ct count parameter of the nft utility, you can limit the number of simultaneous connections per IP address. For example, you can use this feature to configure that each source IP address can only establish two parallel SSH connections to a host. Procedure Create the filter table with the inet address family: Add the input chain to the inet filter table: Create a dynamic set for IPv4 addresses: Add a rule to the input chain that allows only two simultaneous incoming connections to the SSH port (22) from an IPv4 address and rejects all further connections from the same IP: Verification Establish more than two new simultaneous SSH connections from the same IP address to the host. Nftables refuses connections to the SSH port if two connections are already established. Display the limit-ssh meter: The elements entry displays addresses that currently match the rule. 
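For reference, the command and a typical output might look like this; the addresses shown are examples.

nft list set inet filter limit-ssh
table inet filter {
    set limit-ssh {
        type ipv4_addr
        size 65535
        flags dynamic
        elements = { 192.0.2.1 ct count over 2 , 192.0.2.2 ct count over 2 }
    }
}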
In this example, elements lists IP addresses that have active connections to the SSH port. Note that the output does not display the number of active connections or whether connections were rejected. 42.9.2. Blocking IP addresses that attempt more than ten new incoming TCP connections within one minute You can temporarily block hosts that are establishing more than ten IPv4 TCP connections within one minute. Procedure Create the filter table with the ip address family: Add the input chain to the filter table: Add a rule that drops all packets from source addresses that attempt to establish more than ten TCP connections within one minute: The timeout 5m parameter specifies that nftables automatically removes entries after five minutes, which prevents the meter from filling up with stale entries. Verification To display the meter's content, enter: 42.10. Debugging nftables rules The nftables framework provides different options for administrators to debug rules and to check whether packets match them. 42.10.1. Creating a rule with a counter To identify if a rule is matched, you can use a counter. For more information about a procedure that adds a counter to an existing rule, see Adding a counter to an existing rule . Prerequisites The chain to which you want to add the rule exists. Procedure Add a new rule with the counter parameter to the chain. The following example adds a rule with a counter that allows TCP traffic on port 22 and counts the packets and traffic that match this rule: To display the counter values: 42.10.2. Adding a counter to an existing rule To identify if a rule is matched, you can use a counter. For more information about a procedure that adds a new rule with a counter, see Creating a rule with the counter . Prerequisites The rule to which you want to add the counter exists. Procedure Display the rules in the chain including their handles: Add the counter by replacing the rule with an identical rule that includes the counter parameter. The following example replaces the rule displayed in the previous step and adds a counter: To display the counter values: 42.10.3. Monitoring packets that match an existing rule The tracing feature in nftables , in combination with the nft monitor command, enables administrators to display packets that match a rule. You can enable tracing for a rule and use it to monitor packets that match this rule. Prerequisites The rule that you want to monitor exists. Procedure Display the rules in the chain including their handles: Enable tracing by replacing the rule with an identical rule that includes the meta nftrace set 1 parameters. The following example replaces the rule displayed in the previous step and enables tracing: Use the nft monitor command to display the tracing. The following example filters the output of the command to display only entries that contain inet example_table example_chain : Warning Depending on the number of rules with tracing enabled and the amount of matching traffic, the nft monitor command can display a lot of output. Use grep or other utilities to filter the output. 42.11. Backing up and restoring the nftables rule set You can back up nftables rules to a file and later restore them. Administrators can also use a file with the rules to, for example, transfer the rules to a different server. 42.11.1. Backing up the nftables rule set to a file You can use the nft utility to back up the nftables rule set to a file. Procedure To back up the nftables rules: In the format produced by the nft list ruleset command: In JSON format: 42.11.2. 
Restoring the nftables rule set from a file You can restore the nftables rule set from a file. Procedure To restore nftables rules: If the file to restore is in the format produced by nft list ruleset or contains nft commands directly: If the file to restore is in JSON format: 42.12. Additional resources Using nftables in Red Hat Enterprise Linux 8 What comes after iptables? Its successor, of course: nftables Firewalld: The Future is nftables
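As a final illustration of the backup and restore procedures above, a complete round trip in either format looks like this; the file names are examples.

# Back up the current rule set
nft list ruleset > file.nft
nft -j list ruleset > file.json

# Restore it later, or on another server
nft -f file.nft
nft -j -f file.json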
[ "table <table_address_family> <table_name> { }", "nft add table <table_address_family> <table_name>", "table <table_address_family> <table_name> { chain <chain_name> { type <type> hook <hook> priority <priority> policy <policy> ; } }", "nft add chain <table_address_family> <table_name> <chain_name> { type <type> hook <hook> priority <priority> \\; policy <policy> \\; }", "table <table_address_family> <table_name> { chain <chain_name> { type <type> hook <hook> priority <priority> ; policy <policy> ; <rule> } }", "nft add rule <table_address_family> <table_name> <chain_name> <rule>", "nft add table inet nftables_svc", "nft add chain inet nftables_svc INPUT { type filter hook input priority filter \\; policy accept \\; }", "nft add rule inet nftables_svc INPUT tcp dport 22 accept nft add rule inet nftables_svc INPUT tcp dport 443 accept nft add rule inet nftables_svc INPUT reject with icmpx type port-unreachable", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 443 accept # handle 3 reject # handle 4 } }", "nft insert rule inet nftables_svc INPUT position 3 tcp dport 636 accept", "nft add rule inet nftables_svc INPUT position 3 tcp dport 80 accept", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 636 accept # handle 5 tcp dport 443 accept # handle 3 tcp dport 80 accept # handle 6 reject # handle 4 } }", "nft delete rule inet nftables_svc INPUT handle 6", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 636 accept # handle 5 tcp dport 443 accept # handle 3 reject # handle 4 } }", "nft flush chain inet nftables_svc INPUT", "nft list table inet nftables_svc table inet nftables_svc { chain INPUT { type filter hook input priority filter; policy accept } }", "nft delete chain inet nftables_svc INPUT", "nft list table inet nftables_svc table inet nftables_svc { }", "nft delete table inet nftables_svc", "iptables-save >/root/iptables.dump ip6tables-save >/root/ip6tables.dump", "iptables-restore-translate -f /root/iptables.dump > /etc/nftables/ruleset-migrated-from-iptables.nft ip6tables-restore-translate -f /root/ip6tables.dump > /etc/nftables/ruleset-migrated-from-ip6tables.nft", "include \"/etc/nftables/ruleset-migrated-from-iptables.nft\" include \"/etc/nftables/ruleset-migrated-from-ip6tables.nft\"", "systemctl disable --now iptables", "systemctl enable --now nftables", "nft list ruleset", "iptables-translate -A INPUT -s 192.0.2.0/24 -j ACCEPT nft add rule ip filter INPUT ip saddr 192.0.2.0/24 counter accept", "iptables-translate -A INPUT -j CHECKSUM --checksum-fill nft # -A INPUT -j CHECKSUM --checksum-fill", "nft list table inet firewalld nft list table ip firewalld nft list table ip6 firewalld", "#!/usr/sbin/nft -f Flush the rule set flush ruleset table inet example_table { chain example_chain { # Chain for incoming packets that drops all packets that # are not explicitly allowed by any rule in this chain type filter hook input priority 0; policy drop; # Accept connections to port 22 (ssh) tcp dport ssh accept } }", "#!/usr/sbin/nft -f Flush the rule set flush ruleset Create a table add table inet example_table Create a chain for incoming packets that 
drops all packets that are not explicitly allowed by any rule in this chain add chain inet example_table example_chain { type filter hook input priority 0 ; policy drop ; } Add a rule that accepts connections to port 22 (ssh) add rule inet example_table example_chain tcp dport ssh accept", "nft -f /etc/nftables/<example_firewall_script>.nft", "#!/usr/sbin/nft -f", "chown root /etc/nftables/<example_firewall_script>.nft", "chmod u+x /etc/nftables/<example_firewall_script>.nft", "/etc/nftables/<example_firewall_script>.nft", "Flush the rule set flush ruleset add table inet example_table # Create a table", "define INET_DEV = enp1s0", "add rule inet example_table example_chain iifname USDINET_DEV tcp dport ssh accept", "define DNS_SERVERS = { 192.0.2.1 , 192.0.2.2 }", "add rule inet example_table example_chain ip daddr USDDNS_SERVERS accept", "include \"example.nft\"", "include \"/etc/nftables/rulesets/*.nft\"", "include \"/etc/nftables/_example_.nft\"", "systemctl start nftables", "systemctl enable nftables", "nft add table nat", "nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat postrouting oifname \" ens3 \" masquerade", "nft add table nat", "nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat postrouting oifname \" ens3 \" snat to 192.0.2.1", "nft add table nat", "nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat prerouting iifname ens3 tcp dport { 80, 443 } dnat to 192.0.2.1", "nft add rule nat postrouting oifname \"ens3\" masquerade", "nft add rule nat postrouting oifname \"ens3\" snat to 198.51.100.1", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nft add table nat", "nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; }", "nft add rule nat prerouting tcp dport 22 redirect to 2222", "nft add table inet <example-table>", "nft add flowtable inet <example-table> <example-flowtable> { hook ingress priority filter \\; devices = { enp1s0, enp7s0 } \\; }", "nft add chain inet <example-table> <example-forwardchain> { type filter hook forward priority filter \\; }", "nft add rule inet <example-table> <example-forwardchain> ct state established flow add @ <example-flowtable>", "nft list table inet <example-table> table inet example-table { flowtable example-flowtable { hook ingress priority filter devices = { enp1s0, enp7s0 } } chain example-forwardchain { type filter hook forward priority filter; policy accept; ct state established flow add @example-flowtable } }", "nft add rule inet example_table example_chain tcp dport { 22, 80, 443 } accept", "nft list table inet example_table table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport { ssh, http, https } accept } }", "nft add set inet example_table example_set { type ipv4_addr \\; }", "nft add set inet example_table example_set { type ipv4_addr \\; flags interval \\; }", "nft add rule inet example_table example_chain ip saddr @ example_set drop", "nft add element inet example_table example_set { 192.0.2.1, 192.0.2.2 }", "nft add element inet example_table example_set { 192.0.2.0-192.0.2.255 }", "nft add table inet example_table", "nft add chain inet example_table tcp_packets", "nft add rule inet example_table tcp_packets counter", "nft add chain inet example_table 
udp_packets", "nft add rule inet example_table udp_packets counter", "nft add chain inet example_table incoming_traffic { type filter hook input priority 0 \\; }", "nft add rule inet example_table incoming_traffic ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets }", "nft list table inet example_table table inet example_table { chain tcp_packets { counter packets 36379 bytes 2103816 } chain udp_packets { counter packets 10 bytes 1559 } chain incoming_traffic { type filter hook input priority filter; policy accept; ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets } } }", "nft add table ip example_table", "nft add chain ip example_table example_chain { type filter hook input priority 0 \\; }", "nft add map ip example_table example_map { type ipv4_addr : verdict \\; }", "nft add rule example_table example_chain ip saddr vmap @ example_map", "nft add element ip example_table example_map { 192.0.2.1 : accept, 192.0.2.2 : drop }", "nft add element ip example_table example_map { 192.0.2.3 : accept }", "nft delete element ip example_table example_map { 192.0.2.1 }", "nft list ruleset table ip example_table { map example_map { type ipv4_addr : verdict elements = { 192.0.2.2 : drop, 192.0.2.3 : accept } } chain example_chain { type filter hook input priority filter; policy accept; ip saddr vmap @example_map } }", ":msg, startswith, \"nft drop\" -/var/log/nftables.log & stop", "systemctl restart rsyslog", "/var/log/nftables.log { size +10M maxage 30 sharedscripts postrotate /usr/bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true endscript }", "Remove all rules flush ruleset Table for both IPv4 and IPv6 rules table inet nftables_svc { # Define variables for the interface name define INET_DEV = enp1s0 define LAN_DEV = enp7s0 define DMZ_DEV = enp8s0 # Set with the IPv4 addresses of admin PCs set admin_pc_ipv4 { type ipv4_addr elements = { 10.0.0.100, 10.0.0.200 } } # Chain for incoming trafic. Default policy: drop chain INPUT { type filter hook input priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept incoming traffic on loopback interface iifname lo accept # Allow request from LAN and DMZ to local DNS server iifname { USDLAN_DEV, USDDMZ_DEV } meta l4proto { tcp, udp } th dport 53 accept # Allow admins PCs to access the router using SSH iifname USDLAN_DEV ip saddr @admin_pc_ipv4 tcp dport 22 accept # Last action: Log blocked packets # (packets that were not accepted in previous rules in this chain) log prefix \"nft drop IN : \" } # Chain for outgoing traffic. Default policy: drop chain OUTPUT { type filter hook output priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept outgoing traffic on loopback interface oifname lo accept # Allow local DNS server to recursively resolve queries oifname USDINET_DEV meta l4proto { tcp, udp } th dport 53 accept # Last action: Log blocked packets log prefix \"nft drop OUT: \" } # Chain for forwarding traffic. 
Default policy: drop chain FORWARD { type filter hook forward priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # IPv4 access from LAN and internet to the HTTPS server in the DMZ iifname { USDLAN_DEV, USDINET_DEV } oifname USDDMZ_DEV ip daddr 198.51.100.5 tcp dport 443 accept # IPv6 access from internet to the HTTPS server in the DMZ iifname USDINET_DEV oifname USDDMZ_DEV ip6 daddr 2001:db8:b::5 tcp dport 443 accept # Access from LAN and DMZ to HTTPS servers on the internet iifname { USDLAN_DEV, USDDMZ_DEV } oifname USDINET_DEV tcp dport 443 accept # Last action: Log blocked packets log prefix \"nft drop FWD: \" } # Postrouting chain to handle SNAT chain postrouting { type nat hook postrouting priority srcnat; policy accept; # SNAT for IPv4 traffic from LAN to internet iifname USDLAN_DEV oifname USDINET_DEV snat ip to 203.0.113.1 } }", "include \"/etc/nftables/firewall.nft\"", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "systemctl enable --now nftables", "nft list ruleset", "ssh router.example.com ssh: connect to host router.example.com port 22 : Network is unreachable", "journalctl -k -g \"nft drop\" Oct 14 17:27:18 router kernel: nft drop IN : IN=enp8s0 OUT= MAC=... SRC=198.51.100.5 DST=198.51.100.1 ... PROTO=TCP SPT=40464 DPT=22 ... SYN", "Oct 14 17:27:18 router kernel: nft drop IN : IN=enp8s0 OUT= MAC=... SRC=198.51.100.5 DST=198.51.100.1 ... PROTO=TCP SPT=40464 DPT=22 ... SYN", "nft add table ip nat", "nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; }", "nft add rule ip nat prerouting tcp dport 8022 redirect to :22", "nft add table ip nat", "nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain ip nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule ip nat prerouting tcp dport 443 dnat to 192.0.2.1", "nft add rule ip nat postrouting daddr 192.0.2.1 masquerade", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nft add table inet filter", "nft add chain inet filter input { type filter hook input priority 0 \\; }", "nft add set inet filter limit-ssh { type ipv4_addr\\; flags dynamic \\;}", "nft add rule inet filter input tcp dport ssh ct state new add @limit-ssh { ip saddr ct count over 2 } counter reject", "nft list set inet filter limit-ssh table inet filter { set limit-ssh { type ipv4_addr size 65535 flags dynamic elements = { 192.0.2.1 ct count over 2 , 192.0.2.2 ct count over 2 } } }", "nft add table ip filter", "nft add chain ip filter input { type filter hook input priority 0 \\; }", "nft add rule ip filter input ip protocol tcp ct state new, untracked meter ratemeter { ip saddr timeout 5m limit rate over 10/minute } drop", "nft list meter ip filter ratemeter table ip filter { meter ratemeter { type ipv4_addr size 65535 flags dynamic,timeout elements = { 192.0.2.1 limit rate over 10/minute timeout 5m expires 4m58s224ms } } }", "nft add rule inet example_table example_chain tcp dport 22 counter accept", "nft list ruleset table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh counter packets 6872 bytes 105448565 accept } }", "nft --handle list chain inet example_table example_chain table inet example_table { chain example_chain { # handle 1 type filter hook 
input priority filter; policy accept; tcp dport ssh accept # handle 4 } }", "nft replace rule inet example_table example_chain handle 4 tcp dport 22 counter accept", "nft list ruleset table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh counter packets 6872 bytes 105448565 accept } }", "nft --handle list chain inet example_table example_chain table inet example_table { chain example_chain { # handle 1 type filter hook input priority filter; policy accept; tcp dport ssh accept # handle 4 } }", "nft replace rule inet example_table example_chain handle 4 tcp dport 22 meta nftrace set 1 accept", "nft monitor | grep \"inet example_table example_chain\" trace id 3c5eb15e inet example_table example_chain packet: iif \"enp1s0\" ether saddr 52:54:00:17:ff:e4 ether daddr 52:54:00:72:2f:6e ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49710 ip protocol tcp ip length 60 tcp sport 56728 tcp dport ssh tcp flags == syn tcp window 64240 trace id 3c5eb15e inet example_table example_chain rule tcp dport ssh nftrace set 1 accept (verdict accept)", "nft list ruleset > file .nft", "nft -j list ruleset > file .json", "nft -f file .nft", "nft -j -f file .json" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/getting-started-with-nftables_configuring-and-managing-networking
Chapter 8. TokenRequest [authentication.k8s.io/v1]
Chapter 8. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenRequestSpec contains client provided parameters of a token request. status object TokenRequestStatus is the result of a token request. 8.1.1. .spec Description TokenRequestSpec contains client provided parameters of a token request. Type object Required audiences Property Type Description audiences array (string) Audiences are the intendend audiences of the token. A recipient of a token must identify themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences. boundObjectRef object BoundObjectReference is a reference to an object that a token is bound to. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response. 8.1.2. .spec.boundObjectRef Description BoundObjectReference is a reference to an object that a token is bound to. Type object Property Type Description apiVersion string API version of the referent. kind string Kind of the referent. Valid kinds are 'Pod' and 'Secret'. name string Name of the referent. uid string UID of the referent. 8.1.3. .status Description TokenRequestStatus is the result of a token request. Type object Required token expirationTimestamp Property Type Description expirationTimestamp Time ExpirationTimestamp is the time of expiration of the returned token. token string Token is the opaque bearer token. 8.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token POST : create token of a ServiceAccount 8.2.1. /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token Table 8.1. Global path parameters Parameter Type Description name string name of the TokenRequest namespace string object name and auth scope, such as for teams and projects Table 8.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create token of a ServiceAccount Table 8.3. Body parameters Parameter Type Description body TokenRequest schema Table 8.4. HTTP responses HTTP code Reponse body 200 - OK TokenRequest schema 201 - Created TokenRequest schema 202 - Accepted TokenRequest schema 401 - Unauthorized Empty
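A minimal request against this endpoint might look like the following sketch; the API server URL, namespace, service account name, and audience are illustrative, and the spec fields follow the schema described above.

# Adjust certificate options (--cacert or -k) to match your cluster CA
curl -sS -X POST \
  -H "Authorization: Bearer $(oc whoami -t)" \
  -H "Content-Type: application/json" \
  "https://api.example.com:6443/api/v1/namespaces/default/serviceaccounts/builder/token" \
  -d '{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest", "spec": {"audiences": ["https://kubernetes.default.svc"], "expirationSeconds": 3600}}'

A successful response returns the TokenRequest object with status.token and status.expirationTimestamp populated.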
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authorization_apis/tokenrequest-authentication-k8s-io-v1
Chapter 6. Backing OpenShift Container Platform applications with OpenShift Data Foundation
Chapter 6. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads Deployments . Click Workloads Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page.
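For reference, the Create new claim option in the web console corresponds roughly to a PersistentVolumeClaim like the following; the claim name, size, and storage class name are examples, and the class name depends on your OpenShift Data Foundation installation.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce   # or ReadWriteMany for shared CephFS volumes
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # example RBD class name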
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_and_allocating_storage_resources/backing-openshift-container-platform-applications-with-openshift-data-foundation_rhodf
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/scaling_deployments_with_compute_cells/making-open-source-more-inclusive
API overview
API overview OpenShift Container Platform 4.12 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team
[ "oc debug node/<node>", "chroot /host", "systemctl cat kubelet", "/etc/systemd/system/kubelet.service.d/20-logging.conf [Service] Environment=\"KUBELET_LOG_LEVEL=2\"", "echo -e \"[Service]\\nEnvironment=\\\"KUBELET_LOG_LEVEL=8\\\"\" > /etc/systemd/system/kubelet.service.d/30-logging.conf", "systemctl daemon-reload", "systemctl restart kubelet", "rm -f /etc/systemd/system/kubelet.service.d/30-logging.conf", "systemctl daemon-reload", "systemctl restart kubelet", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-master-kubelet-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - name: kubelet.service enabled: true dropins: - name: 30-logging.conf contents: | [Service] Environment=\"KUBELET_LOG_LEVEL=2\"", "oc adm node-logs --role master -u kubelet", "oc adm node-logs --role worker -u kubelet", "journalctl -b -f -u kubelet.service", "sudo tail -f /var/log/containers/*", "- for n in USD(oc get node --no-headers | awk '{print USD1}'); do oc adm node-logs USDn | gzip > USDn.log.gz; done" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/api_overview/index
Chapter 39. Managing subID ranges manually
Chapter 39. Managing subID ranges manually In a containerized environment, sometimes an IdM user needs to assign subID ranges manually. The following instructions describe how to manage the subID ranges. 39.1. Generating subID ranges using IdM CLI As an Identity Management (IdM) administrator, you can generate a subID range and assign it to IdM users. Prerequisites The IdM users exist. You have obtained an IdM admin ticket-granting ticket (TGT). See Using kinit to log in to IdM manually for more details. You have root access to the IdM host where you are executing the procedure. Procedure Optional: Check for existing subID ranges: If a subID range does not exist, select one of the following options: Generate and assign a subID range to an IdM user: Generate and assign subID ranges to all IdM users: Optional: Assign subID ranges to new IdM users by default: Verification Verify that the user has a subID range assigned: 39.2. Generating subID ranges using IdM WebUI interface As an Identity Management (IdM) administrator, you can generate a subID range and assign it to a user in the IdM WebUI interface. Prerequisites The IdM user exists. You have obtained an IdM admin Kerberos ticket (TGT). See Logging in to IdM in the Web UI: Using a Kerberos ticket for more details. You have root access to the IdM host where you are executing the procedure. Procedure In the IdM WebUI interface expand the Subordinate IDs tab and choose the Subordinate IDs option. When the Subordinate IDs interface appears, click the Add button in the upper-right corner of the interface. The Add subid window appears. In the Add subid window choose an owner, that is the user to whom you want to assign a subID range. Click the Add button. Verification View the table under the Subordinate IDs tab. A new record shows in the table. The owner is the user to whom you assigned the subID range. 39.3. Viewing subID information about IdM users by using IdM CLI As an Identity Management (IdM) user, you can search for IdM user subID ranges and view the related information. Prerequisites You have configured a subID range on the IdM client . You have obtained an IdM user ticket-granting ticket (TGT). See Using kinit to log in to IdM manually for more details. Procedure To view the details about a subID range: If you know the unique ID hash of the Identity Management (IdM) user that is the owner of the range: If you know a specific subID from that range: 39.4. Listing subID ranges using the getsubid command As a system administrator, you can use the command-line interface to list the subID ranges of Identity Management (IdM) or local users. Prerequisites The idmuser user exists in IdM. The shadow-utils-subid package is installed. You can edit the /etc/nsswitch.conf file. Procedure Open the /etc/nsswitch.conf file and configure the shadow-utils utility to use IdM subID ranges by setting the subid variable to the sss value: Note You can provide only one value for the subid field. Setting the subid field to the file value or no value instead of sss configures the shadow-utils utility to use the subID ranges from the /etc/subuid and /etc/subgid files. List the subID range for an IdM user: The first value, 2147483648, indicates the subID range start. The second value, 65536, indicates the size of the range.
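As a small verification sketch on a client, assuming the idmuser account and the sss subid source configured above (the numbers simply reuse the range from the example output):
getsubids idmuser       # subUID range provided through IdM
getsubids -g idmuser    # subGID range; the -g flag selects group IDs
# A range of size 65536 starting at 2147483648 covers IDs up to start + size - 1:
echo $((2147483648 + 65536 - 1))    # 2147549183, the last usable subordinate ID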
[ "ipa subid-find", "ipa subid-generate --owner=idmuser Added subordinate id \"359dfcef-6b76-4911-bd37-bb5b66b8c418\" Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Description: auto-assigned subid Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536", "/usr/libexec/ipa/ipa-subids --all-users Found 2 user(s) without subordinate ids Processing user 'user4' (1/2) Processing user 'user5' (2/2) Updated 2 user(s) The ipa-subids command was successful", "ipa config-mod --user-default-subid=True", "ipa subid-find --owner=idmuser 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1", "ipa subid-show 359dfcef-6b76-4911-bd37-bb5b66b8c418 Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536", "ipa subid-match --subuid=2147483670 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: uid=idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1", "[...] subid: sss", "getsubids idmuser 0: idmuser 2147483648 65536" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/assembly_managing-subid-ranges-manually_configuring-and-managing-idm
Chapter 45. gRPC
Chapter 45. gRPC Since Camel 2.19 Both producer and consumer are supported The gRPC component allows you to call or expose Remote Procedure Call (RPC) services using Protocol Buffers (protobuf) exchange format over HTTP/2 transport. 45.1. Dependencies When using grpc with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-grpc-starter</artifactId> </dependency> 45.2. URI format 45.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 45.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 45.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 45.4. Component Options The gRPC component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 45.5. Endpoint Options The gRPC endpoint is configured using URI syntax: with the following path and query parameters: 45.5.1. Path Parameters (3 parameters) Name Description Default Type host (common) Required The gRPC server host name. This is localhost or 0.0.0.0 when being a consumer or remote server host name when using producer. String port (common) Required The gRPC local or remote server port. int service (common) Required Fully qualified service name from the protocol buffer descriptor file (package dot service definition name). String 45.5.2. Query Parameters (29 parameters) Name Description Default Type flowControlWindow (common) The HTTP/2 flow control window size (MiB). 1048576 int maxMessageSize (common) The maximum message size allowed to be received/sent (MiB). 4194304 int autoDiscoverServerInterceptors (consumer) Setting the autoDiscoverServerInterceptors mechanism, if true, the component will look for a ServerInterceptor instance in the registry automatically otherwise it will skip that checking. true boolean consumerStrategy (consumer) This option specifies the top-level strategy for processing service requests and responses in streaming mode. If an aggregation strategy is selected, all requests will be accumulated in the list, then transferred to the flow, and the accumulated responses will be sent to the sender. If a propagation strategy is selected, request is sent to the stream, and the response will be immediately sent back to the sender. Enum values: AGGREGATION PROPAGATION PROPAGATION GrpcConsumerStrategy forwardOnCompleted (consumer) Determines if onCompleted events should be pushed to the Camel route. false boolean forwardOnError (consumer) Determines if onError events should be pushed to the Camel route. Exceptions will be set as message body. false boolean maxConcurrentCallsPerConnection (consumer) The maximum number of concurrent calls permitted for each incoming server connection. 2147483647 int routeControlledStreamObserver (consumer) Lets the route to take control over stream observer. If this value is set to true, then the response observer of gRPC call will be set with the name GrpcConstants.GRPC_RESPONSE_OBSERVER in the Exchange object. Please note that the stream observer's onNext(), onError(), onCompleted() methods should be called in the route. false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. 
Enum values: InOnly InOut ExchangePattern autoDiscoverClientInterceptors (producer) Setting the autoDiscoverClientInterceptors mechanism, if true, the component will look for a ClientInterceptor instance in the registry automatically otherwise it will skip that checking. true boolean method (producer) gRPC method name. String producerStrategy (producer) The mode used to communicate with a remote gRPC server. In SIMPLE mode a single exchange is translated into a remote procedure call. In STREAMING mode all exchanges will be sent within the same request (input and output of the recipient gRPC service must be of type 'stream'). Enum values: SIMPLE STREAMING SIMPLE GrpcProducerStrategy streamRepliesTo (producer) When using STREAMING client mode, it indicates the endpoint where responses should be forwarded. String userAgent (producer) The user agent header passed to the server. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean authenticationType (security) Authentication method type in advance to the SSL/TLS negotiation. Enum values: NONE GOOGLE JWT NONE GrpcAuthType jwtAlgorithm (security) JSON Web Token sign algorithm. Enum values: HMAC256 HMAC384 HMAC512 HMAC256 JwtAlgorithm jwtIssuer (security) JSON Web Token issuer. String jwtSecret (security) JSON Web Token secret. String jwtSubject (security) JSON Web Token subject. String keyCertChainResource (security) The X.509 certificate chain file resource in PEM format link. String keyPassword (security) The PKCS#8 private key file password. String keyResource (security) The PKCS#8 private key file resource in PEM format link. String negotiationType (security) Identifies the security negotiation type used for HTTP/2 communication. Enum values: TLS PLAINTEXT_UPGRADE PLAINTEXT PLAINTEXT NegotiationType serviceAccountResource (security) Service Account key file in JSON format resource link supported by the Google Cloud SDK. String trustCertCollectionResource (security) The trusted certificates collection file resource in PEM format for verifying the remote endpoint's certificate. String 45.6. Message Headers The gRPC component supports 3 message header(s), which is/are listed below: Name Description Default Type CamelGrpcMethodName (consumer) Constant: GRPC_METHOD_NAME_HEADER Method name handled by the consumer service. String CamelGrpcUserAgent (consumer) Constant: GRPC_USER_AGENT_HEADER If provided, the given agent will prepend the gRPC library's user agent information. String CamelGrpcEventType (consumer) Constant: GRPC_EVENT_TYPE_HEADER Received event type from the sent request. Possible values: onNext onCompleted onError. String 45.7. 
Transport security and authentication support The following authentication mechanisms are built-in to gRPC and available in this component: SSL/TLS: gRPC has SSL/TLS integration and promotes the use of SSL/TLS to authenticate the server, and to encrypt all the data exchanged between the client and the server. Optional mechanisms are available for clients to provide certificates for mutual authentication. Token-based authentication with Google: gRPC provides a generic mechanism to attach metadata based credentials to requests and responses. Additional support for acquiring access tokens while accessing Google APIs through gRPC is provided. In general this mechanism must be used as well as SSL/TLS on the channel. To enable these features the following component properties combinations must be configured: Num. Option Parameter Value Required/Optional 1 SSL/TLS negotiationType TLS Required keyCertChainResource Required keyResource Required keyPassword Optional trustCertCollectionResource Optional 2 Token-based authentication with Google API authenticationType GOOGLE Required negotiationType TLS Required serviceAccountResource Required 3 Custom JSON Web Token implementation authentication authenticationType JWT Required negotiationType NONE or TLS Optional. The TLS/SSL not checking for this type, but strongly recommended. jwtAlgorithm HMAC256(default) or (HMAC384,HMAC512) Optional jwtSecret Required jwtIssuer Optional jwtSubject Optional 45.8. gRPC producer resource type mapping The table below shows the types of objects in the message body, depending on the types (simple or stream) of incoming and outgoing parameters, as well as the invocation style (synchronous or asynchronous). Please note, that invocation of the procedures with incoming stream parameter in asynchronous style are not allowed. Invocation style Request type Response type Request Body Type Result Body Type synchronous simple simple Object Object synchronous simple stream Object List<Object> synchronous stream simple not allowed not allowed synchronous stream stream not allowed not allowed asynchronous simple simple Object List<Object> asynchronous simple stream Object List<Object> asynchronous stream simple Object or List<Object> List<Object> asynchronous stream stream Object or List<Object> List<Object> 45.9. 
Examples Below is a simple synchronous method invoke with host and port parameters: from("direct:grpc-sync") .to("grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=sendPing&synchronous=true"); <route> <from uri="direct:grpc-sync" /> <to uri="grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=sendPing&synchronous=true"/> </route> An asynchronous method invokes from("direct:grpc-async") .to("grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=pingAsyncResponse"); a gRPC service consumer with propagation consumer strategy: from("grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?consumerStrategy=PROPAGATION") .to("direct:grpc-service"); gRPC service producer with streaming producer strategy (requires a service that uses "stream" mode as input and output): from("direct:grpc-request-stream") .to("grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=PingAsyncAsync&producerStrategy=STREAMING&streamRepliesTo=direct:grpc-response-stream"); from("direct:grpc-response-stream") .log("Response received: USD{body}"); gRPC service consumer TLS/SSL security negotiation enabled: from("grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?consumerStrategy=PROPAGATION&negotiationType=TLS&keyCertChainResource=file:src/test/resources/certs/server.pem&keyResource=file:src/test/resources/certs/server.key&trustCertCollectionResource=file:src/test/resources/certs/ca.pem") .to("direct:tls-enable") gRPC service producer with custom JSON Web Token (JWT) implementation authentication: from("direct:grpc-jwt") .to("grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?method=pingSyncSync&synchronous=true&authenticationType=JWT&jwtSecret=supersecuredsecret"); 45.10. Configuration Use the protobuf-maven-plugin , which calls the Protocol Buffer Compiler (protoc) to generate Java source files from .proto (protocol buffer definition) files. This plugin will generate procedures request and response classes, their builders and gRPC procedures stubs classes as well. Following steps are required: Insert operating system and CPU architecture detection extension inside <build> tag of the project pom.xml or set USD\{os.detected.classifier} parameter manually <extensions> <extension> <groupId>kr.motd.maven</groupId> <artifactId>os-maven-plugin</artifactId> <version>1.7.1</version> </extension> </extensions> Insert the gRPC and protobuf Java code generator plugins into the <plugins> tag of the project's pom.xml . <plugin> <groupId>org.xolstice.maven.plugins</groupId> <artifactId>protobuf-maven-plugin</artifactId> <version>0.6.1</version> <configuration> <protocArtifact>com.google.protobuf:protoc:USD{protobuf-version}:exe:USD{os.detected.classifier}</protocArtifact> <pluginId>grpc-java</pluginId> <pluginArtifact>io.grpc:protoc-gen-grpc-java:USD{grpc-version}:exe:USD{os.detected.classifier}</pluginArtifact> </configuration> <executions> <execution> <goals> <goal>compile</goal> <goal>compile-custom</goal> <goal>test-compile</goal> <goal>test-compile-custom</goal> </goals> </execution> </executions> </plugin> 45.11. Additional resources See these resources: gRPC project site Maven Protocol Buffers Plugin 45.12. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.grpc.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.grpc.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.grpc.enabled Whether to enable auto configuration of the grpc component. This is enabled by default. Boolean camel.component.grpc.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
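As an illustration only, the auto-configuration options above could be set in application.properties like this (the property names come straight from the table; the values shown are examples, not recommendations):
camel.component.grpc.enabled=true
camel.component.grpc.autowired-enabled=true
camel.component.grpc.bridge-error-handler=false
camel.component.grpc.lazy-start-producer=false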
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-grpc-starter</artifactId> </dependency>", "grpc:host:port/service[?options]", "grpc:host:port/service", "from(\"direct:grpc-sync\") .to(\"grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=sendPing&synchronous=true\");", "<route> <from uri=\"direct:grpc-sync\" /> <to uri=\"grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=sendPing&synchronous=true\"/> </route>", "from(\"direct:grpc-async\") .to(\"grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=pingAsyncResponse\");", "from(\"grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?consumerStrategy=PROPAGATION\") .to(\"direct:grpc-service\");", "from(\"direct:grpc-request-stream\") .to(\"grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=PingAsyncAsync&producerStrategy=STREAMING&streamRepliesTo=direct:grpc-response-stream\"); from(\"direct:grpc-response-stream\") .log(\"Response received: USD{body}\");", "from(\"grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?consumerStrategy=PROPAGATION&negotiationType=TLS&keyCertChainResource=file:src/test/resources/certs/server.pem&keyResource=file:src/test/resources/certs/server.key&trustCertCollectionResource=file:src/test/resources/certs/ca.pem\") .to(\"direct:tls-enable\")", "from(\"direct:grpc-jwt\") .to(\"grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?method=pingSyncSync&synchronous=true&authenticationType=JWT&jwtSecret=supersecuredsecret\");", "<extensions> <extension> <groupId>kr.motd.maven</groupId> <artifactId>os-maven-plugin</artifactId> <version>1.7.1</version> </extension> </extensions>", "<plugin> <groupId>org.xolstice.maven.plugins</groupId> <artifactId>protobuf-maven-plugin</artifactId> <version>0.6.1</version> <configuration> <protocArtifact>com.google.protobuf:protoc:USD{protobuf-version}:exe:USD{os.detected.classifier}</protocArtifact> <pluginId>grpc-java</pluginId> <pluginArtifact>io.grpc:protoc-gen-grpc-java:USD{grpc-version}:exe:USD{os.detected.classifier}</pluginArtifact> </configuration> <executions> <execution> <goals> <goal>compile</goal> <goal>compile-custom</goal> <goal>test-compile</goal> <goal>test-compile-custom</goal> </goals> </execution> </executions> </plugin>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-grpc-component-starter
23.16. Write Changes to Disk
23.16. Write Changes to Disk The installer prompts you to confirm the partitioning options that you selected. Click Write changes to disk to allow the installer to partition your hard drive and install Red Hat Enterprise Linux. Figure 23.45. Writing storage configuration to disk If you are certain that you want to proceed, click Write changes to disk . Warning Up to this point in the installation process, the installer has made no lasting changes to your computer. When you click Write changes to disk , the installer will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, click Go back . To cancel installation completely, switch off your computer. After you click Write changes to disk , allow the installation process to complete. If the process is interrupted (for example, by you switching off or resetting the computer, or by a power outage) you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Write_changes_to_disk-s390
Planning your deployment
Planning your deployment Red Hat OpenShift Data Foundation 4.18 Important considerations when deploying Red Hat OpenShift Data Foundation 4.18 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/index
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in the Planning guide . 
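The KMS preparation above calls for signed certificates on the Vault servers. One quick, non-authoritative way to check what a server presents is a TLS handshake test; the host name, port, and CA bundle path below are assumptions for a typical RHEL client:
openssl s_client -connect vault.example.com:8200 \
  -CAfile /etc/pki/tls/certs/ca-bundle.crt -brief </dev/null
# "Verification: OK" means the certificate chains to a trusted CA;
# a self-signed or unknown-issuer certificate shows a verification error instead.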
Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_amazon_web_services/preparing_to_deploy_openshift_data_foundation
Chapter 4. Clair on OpenShift Container Platform
Chapter 4. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator installs or upgrades a Clair deployment along with your Red Hat Quay deployment and configures Clair automatically.
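As a sketch of what that default looks like, a QuayRegistry resource that leaves the Clair component managed might resemble the following; the name and namespace are assumptions, and components that are not listed keep their defaults:
cat <<'EOF' | oc apply -f -
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry        # hypothetical registry name
  namespace: quay-enterprise    # hypothetical namespace
spec:
  components:
    - kind: clair
      managed: true             # the Operator deploys and configures Clair v4
EOF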
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-quay-operator-overview
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_execution_api_overview/providing-feedback
Chapter 4. Using SSL to protect connections to Red Hat Quay
Chapter 4. Using SSL to protect connections to Red Hat Quay 4.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and a primary key file named ssl.cert and ssl.key . 4.2. Creating a Certificate Authority Use the following procedure to set up your own CA and use it to issue a server certificate for your domain. This allows you to secure communications with SSL/TLS using your own certificates. Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: Example openssl.cnf file [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf Confirm your created certificates and files by entering the following command: USD ls /path/to/certificates Example output rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr 4.3. Configuring custom SSL/TLS certificates by using the command line interface SSL/TLS must be configured by using the command-line interface (CLI) and updating your config.yaml file manually. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory Navigate to the configuration directory by entering the following command: USD cd /path/to/configuration_directory Edit the config.yaml file and specify that you want Red Hat Quay to handle SSL/TLS: Example config.yaml file # ... SERVER_HOSTNAME: <quay-server.example.com> ... PREFERRED_URL_SCHEME: https # ... 
Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop <quay_container_name> Restart the registry by entering the following command: 4.4. Configuring SSL/TLS using the Red Hat Quay UI Use the following procedure to configure SSL/TLS using the Red Hat Quay UI. To configure SSL/TLS using the command line interface, see "Configuring SSL/TLS using the command line interface". Prerequisites You have created a certificate authority and signed a certificate. Procedure Start the Quay container in configuration mode: In the Server Configuration section, select Red Hat Quay handles TLS for SSL/TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when the certificates were created. Validate and download the updated configuration. Stop the Quay container and then restart the registry by entering the following command: 4.5. Testing the SSL/TLS configuration using the CLI Use the following procedure to test your SSL/TLS configuration by using the command-line interface (CLI). Procedure Enter the following command to attempt to log in to the Red Hat Quay registry with SSL/TLS enabled: USD sudo podman login quay-server.example.com Example output Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority Because Podman does not trust self-signed certificates, you must use the --tls-verify=false option: USD sudo podman login --tls-verify=false quay-server.example.com Example output Login Succeeded! In a subsequent section, you will configure Podman to trust the root Certificate Authority. 4.6. Testing the SSL/TLS configuration using a browser Use the following procedure to test your SSL/TLS configuration using a browser. Procedure Navigate to your Red Hat Quay registry endpoint, for example, https://quay-server.example.com . If configured correctly, the browser warns of the potential risk. Proceed to the login screen. The browser notifies you that the connection is not secure. In the following section, you will configure Podman to trust the root Certificate Authority. 4.7. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 4.8. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority.
Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates .
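In addition to the podman login and browser checks above, a direct handshake test can confirm that the served certificate chains to the root CA you created; a minimal sketch, reusing the quay-server.example.com hostname and rootCA.pem file from the earlier steps:
openssl s_client -connect quay-server.example.com:443 -CAfile rootCA.pem -brief </dev/null
# "Verification: OK" indicates the registry certificate is signed by your root CA.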
[ "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "ls /path/to/certificates", "rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr", "cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory", "cd /path/to/configuration_directory", "SERVER_HOSTNAME: <quay-server.example.com> PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop <quay_container_name>", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret", "sudo podman rm -f quay sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "sudo podman login quay-server.example.com", "Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority", "sudo podman login --tls-verify=false quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/using-ssl-to-protect-quay
Chapter 4. Using external storage
Chapter 4. Using external storage Organizations can have databases containing information, passwords, and other credentials. Typically, you cannot migrate existing data storage to a Red Hat build of Keycloak deployment so Red Hat build of Keycloak can federate existing external user databases. Red Hat build of Keycloak supports LDAP and Active Directory, but you can also code extensions for any custom user database by using the Red Hat build of Keycloak User Storage SPI. When a user attempts to log in, Red Hat build of Keycloak examines that user's storage to find that user. If Red Hat build of Keycloak does not find the user, Red Hat build of Keycloak iterates over each User Storage provider for the realm until it finds a match. Data from the external data storage then maps into a standard user model the Red Hat build of Keycloak runtime consumes. This user model then maps to OIDC token claims and SAML assertion attributes. External user databases rarely have the data necessary to support all the features of Red Hat build of Keycloak, so the User Storage Provider can opt to store items locally in Red Hat build of Keycloak user data storage. Providers can import users locally and sync periodically with external data storage. This approach depends on the capabilities of the provider and the configuration of the provider. For example, your external user data storage may not support OTP. The OTP can be handled and stored by Red Hat build of Keycloak, depending on the provider. 4.1. Adding a provider To add a storage provider, perform the following procedure: Procedure Click User Federation in the menu. User federation Select the provider type card from the listed cards. Red Hat build of Keycloak brings you to that provider's configuration page. 4.2. Dealing with provider failures If a User Storage Provider fails, you may not be able to log in and view users in the Admin Console. Red Hat build of Keycloak does not detect failures when using a Storage Provider to look up a user, so it cancels the invocation. If you have a Storage Provider with a high priority that fails during user lookup, the login or user query fails with an exception and will not fail over to the configured provider. Red Hat build of Keycloak searches the local Red Hat build of Keycloak user database first to resolve users before any LDAP or custom User Storage Provider. Consider creating an administrator account stored in the local Red Hat build of Keycloak user database in case of problems connecting to your LDAP and back ends. Each LDAP and custom User Storage Provider has an enable toggle on its Admin Console page. Disabling the User Storage Provider skips the provider when performing queries, so you can view and log in with user accounts in a different provider with lower priority. If your provider uses an import strategy and is disabled, imported users are still available for lookup in read-only mode. When a Storage Provider lookup fails, Red Hat build of Keycloak does not fail over because user databases often have duplicate usernames or duplicate emails between them. Duplicate usernames and emails can cause problems because the user loads from one external data store when the admin expects them to load from another data store. 4.3. Lightweight Directory Access Protocol (LDAP) and Active Directory Red Hat build of Keycloak includes an LDAP/AD provider. 
You can federate multiple different LDAP servers in one Red Hat build of Keycloak realm and map LDAP user attributes into the Red Hat build of Keycloak common user model. By default, Red Hat build of Keycloak maps the username, email, first name, and last name of the user account, but you can also configure additional mappings . Red Hat build of Keycloak's LDAP/AD provider supports password validation using LDAP/AD protocols and storage, edit, and synchronization modes. 4.3.1. Configuring federated LDAP storage Procedure Click User Federation in the menu. User federation Click Add LDAP providers . Red Hat build of Keycloak brings you to the LDAP configuration page. 4.3.2. Storage mode Red Hat build of Keycloak imports users from LDAP into the local Red Hat build of Keycloak user database. This copy of the user database synchronizes on-demand or through a periodic background task. An exception exists for synchronizing passwords. Red Hat build of Keycloak never imports passwords. Password validation always occurs on the LDAP server. The advantage of synchronization is that all Red Hat build of Keycloak features work efficiently because any required extra per-user data is stored locally. The disadvantage is that each time Red Hat build of Keycloak queries a specific user for the first time, Red Hat build of Keycloak performs a corresponding database insert. You can synchronize the import with your LDAP server. Import synchronization is unnecessary when LDAP mappers always read particular attributes from the LDAP rather than the database. You can use LDAP with Red Hat build of Keycloak without importing users into the Red Hat build of Keycloak user database. The LDAP server backs up the common user model that the Red Hat build of Keycloak runtime uses. If LDAP does not support data that a Red Hat build of Keycloak feature requires, that feature will not work. The advantage of this approach is that you do not have the resource usage of importing and synchronizing copies of LDAP users into the Red Hat build of Keycloak user database. The Import Users switch on the LDAP configuration page controls this storage mode. To import users, toggle this switch to ON . Note If you disable Import Users , you cannot save user profile attributes into the Red Hat build of Keycloak database. Also, you cannot save metadata except for user profile metadata mapped to the LDAP. This metadata can include role mappings, group mappings, and other metadata based on the LDAP mappers' configuration. When you attempt to change the non-LDAP mapped user data, the user update is not possible. For example, you cannot disable the LDAP mapped user unless the user's enabled flag maps to an LDAP attribute. 4.3.3. Edit mode Users and admins can modify user metadata, users through the Account Console , and administrators through the Admin Console. The Edit Mode configuration on the LDAP configuration page defines the user's LDAP update privileges. READONLY You cannot change the username, email, first name, last name, and other mapped attributes. Red Hat build of Keycloak shows an error anytime a user attempts to update these fields. Password updates are not supported. WRITABLE You can change the username, email, first name, last name, and other mapped attributes and passwords and synchronize them automatically with the LDAP store. 
UNSYNCED Red Hat build of Keycloak stores changes to the username, email, first name, last name, and passwords in Red Hat build of Keycloak local storage, so the administrator must synchronize this data back to LDAP. In this mode, Red Hat build of Keycloak deployments can update user metadata on read-only LDAP servers. This option also applies when importing users from LDAP into the local Red Hat build of Keycloak user database. Note When Red Hat build of Keycloak creates the LDAP provider, Red Hat build of Keycloak also creates a set of initial LDAP mappers . Red Hat build of Keycloak configures these mappers based on a combination of the Vendor , Edit Mode , and Import Users switches. For example, when edit mode is UNSYNCED, Red Hat build of Keycloak configures the mappers to read a particular user attribute from the database and not from the LDAP server. However, if you later change the edit mode, the mapper's configuration does not change because it is impossible to detect if the configuration changes changed in UNSYNCED mode. Decide the Edit Mode when creating the LDAP provider. This note applies to Import Users switch also. 4.3.4. Other configuration options Console Display Name The name of the provider to display in the admin console. Priority The priority of the provider when looking up users or adding a user. Sync Registrations Toggle this switch to ON if you want new users created by Red Hat build of Keycloak added to LDAP. Allow Kerberos authentication Enable Kerberos/SPNEGO authentication in the realm with user data provisioned from LDAP. For more information, see the Kerberos section . Other options Hover the mouse pointer over the tooltips in the Admin Console to see more details about these options. 4.3.5. Connecting to LDAP over SSL When you configure a secure connection URL to your LDAP store (for example, ldaps://myhost.com:636 ), Red Hat build of Keycloak uses SSL to communicate with the LDAP server. Configure a truststore on the Red Hat build of Keycloak server side so that Red Hat build of Keycloak can trust the SSL connection to LDAP. Configure the global truststore for Red Hat build of Keycloak with the Truststore SPI. For more information about configuring the global truststore, see the Configuring a Truststore chapter. If you do not configure the Truststore SPI, the truststore falls back to the default mechanism provided by Java, which can be the file supplied by the javax.net.ssl.trustStore system property or the cacerts file from the JDK if the system property is unset. The Use Truststore SPI configuration property, in the LDAP federation provider configuration, controls the truststore SPI. By default, Red Hat build of Keycloak sets the property to Always , which is adequate for most deployments. Red Hat build of Keycloak uses the Truststore SPI if the connection URL to LDAP starts with ldaps only. 4.3.6. Synchronizing LDAP users to Red Hat build of Keycloak If you set the Import Users option, the LDAP Provider handles importing LDAP users into the Red Hat build of Keycloak local database. The first time a user logs in or is returned as part of a user query (e.g. using the search field in the admin console), the LDAP provider imports the LDAP user into the Red Hat build of Keycloak database. During authentication, the LDAP password is validated. If you want to sync all LDAP users into the Red Hat build of Keycloak database, configure and enable the Sync Settings on the LDAP provider configuration page. 
Two types of synchronization exist: Periodic Full sync This type synchronizes all LDAP users into the Red Hat build of Keycloak database. The LDAP users already in Red Hat build of Keycloak, but different in LDAP, directly update in the Red Hat build of Keycloak database. Periodic Changed users sync When synchronizing, Red Hat build of Keycloak creates or updates users created or updated after the last sync only. The best way to synchronize is to click Synchronize all users when you first create the LDAP provider, then set up periodic synchronization of changed users. 4.3.7. LDAP mappers LDAP mappers are listeners triggered by the LDAP Provider. They provide another extension point to LDAP integration. LDAP mappers are triggered when: Users log in by using LDAP. Users initially register. The Admin Console queries a user. When you create an LDAP Federation provider, Red Hat build of Keycloak automatically provides a set of mappers for this provider. This set is changeable by users, who can also develop mappers or update/delete existing ones. User Attribute Mapper This mapper specifies which LDAP attribute maps to the attribute of the Red Hat build of Keycloak user. For example, you can configure the mail LDAP attribute to the email attribute in the Red Hat build of Keycloak database. For this mapper implementation, a one-to-one mapping always exists. FullName Mapper This mapper specifies the full name of the user. Red Hat build of Keycloak saves the name in an LDAP attribute (usually cn ) and maps the name to the firstName and lastname attributes in the Red Hat build of Keycloak database. Having cn to contain the full name of the user is common for LDAP deployments. Note When you register new users in Red Hat build of Keycloak and Sync Registrations is ON for the LDAP provider, the fullName mapper permits falling back to the username. This fallback is useful when using Microsoft Active Directory (MSAD). The common setup for MSAD is to configure the cn LDAP attribute as fullName and, at the same time, use the cn LDAP attribute as the RDN LDAP Attribute in the LDAP provider configuration. With this setup, Red Hat build of Keycloak falls back to the username. For example, if you create Red Hat build of Keycloak user "john123" and leave firstName and lastName empty, then the fullname mapper saves "john123" as the value of the cn in LDAP. When you enter "John Doe" for firstName and lastName later, the fullname mapper updates LDAP cn to the "John Doe" value as falling back to the username is unnecessary. Hardcoded Attribute Mapper This mapper adds a hardcoded attribute value to each Red Hat build of Keycloak user linked with LDAP. This mapper can also force values for the enabled or emailVerified user properties. Role Mapper This mapper configures role mappings from LDAP into Red Hat build of Keycloak role mappings. A single role mapper can map LDAP roles (usually groups from a particular branch of the LDAP tree) into roles corresponding to a specified client's realm roles or client roles. You can configure more Role mappers for the same LDAP provider. For example, you can specify that role mappings from groups under ou=main,dc=example,dc=org map to realm role mappings, and role mappings from groups under ou=finance,dc=example,dc=org map to client role mappings of client finance . Hardcoded Role Mapper This mapper grants a specified Red Hat build of Keycloak role to each Red Hat build of Keycloak user from the LDAP provider. 
Group Mapper This mapper maps LDAP groups from a branch of an LDAP tree into groups within Red Hat build of Keycloak. This mapper also propagates user-group mappings from LDAP into user-group mappings in Red Hat build of Keycloak. MSAD User Account Mapper This mapper is specific to Microsoft Active Directory (MSAD). It can integrate the MSAD user account state into the Red Hat build of Keycloak account state, such as enabled account or expired password. This mapper uses the userAccountControl , and pwdLastSet LDAP attributes, specific to MSAD and are not the LDAP standard. For example, if the value of pwdLastSet is 0 , the Red Hat build of Keycloak user must update their password. The result is an UPDATE_PASSWORD required action added to the user. If the value of userAccountControl is 514 (disabled account), the Red Hat build of Keycloak user is disabled. Certificate Mapper This mapper maps X.509 certificates. Red Hat build of Keycloak uses it in conjunction with X.509 authentication and Full certificate in PEM format as an identity source. This mapper behaves similarly to the User Attribute Mapper , but Red Hat build of Keycloak can filter for an LDAP attribute storing a PEM or DER format certificate. Enable Always Read Value From LDAP with this mapper. User Attribute mappers that map basic Red Hat build of Keycloak user attributes, such as username, firstname, lastname, and email, to corresponding LDAP attributes. You can extend these and provide your own additional attribute mappings. The Admin Console provides tooltips to help with configuring the corresponding mappers. 4.3.8. Password hashing When Red Hat build of Keycloak updates a password, Red Hat build of Keycloak sends the password in plain-text format. This action is different from updating the password in the built-in Red Hat build of Keycloak database, where Red Hat build of Keycloak hashes and salts the password before sending it to the database. For LDAP, Red Hat build of Keycloak relies on the LDAP server to hash and salt the password. By default, LDAP servers such as MSAD, RHDS, or FreeIPA hash and salt passwords. Other LDAP servers such as OpenLDAP or ApacheDS store the passwords in plain-text unless you use the LDAPv3 Password Modify Extended Operation as described in RFC3062 . Enable the LDAPv3 Password Modify Extended Operation in the LDAP configuration page. See the documentation of your LDAP server for more details. Warning Always verify that user passwords are properly hashed and not stored as plaintext by inspecting a changed directory entry using ldapsearch and base64 decode the userPassword attribute value. 4.3.9. Troubleshooting It is useful to increase the logging level to TRACE for the category org.keycloak.storage.ldap . With this setting, many logging messages are sent to the server log in the TRACE level, including the logging for all queries to the LDAP server and the parameters, which were used to send the queries. When you are creating any LDAP question on user forum or JIRA, consider attaching the server log with enabled TRACE logging. If it is too big, the good alternative is to include just the snippet from server log with the messages, which were added to the log during the operation, which causes the issues to you. When you create an LDAP provider, a message appears in the server log in the INFO level starting with: It shows the configuration of your LDAP provider. Before you are asking the questions or reporting bugs, it will be nice to include this message to show your LDAP configuration. 
You can replace any configuration values that you do not want to include with placeholder values. One example is bindDn=some-placeholder . You can also replace connectionUrl , but it is generally useful to include at least the protocol that was used ( ldap vs ldaps ). Similarly, it can be useful to include the configuration details of your LDAP mappers, which are displayed with a message like this at the DEBUG level: Note that these messages are displayed only when DEBUG logging is enabled. To track performance or connection pooling issues, consider setting the Connection Pool Debug Level property of the LDAP provider to all . This setting adds many additional messages to the server log, including logging for the LDAP connection pooling, which you can use to track issues related to connection pooling or performance. Note After changing the connection pooling configuration, you may need to restart the Red Hat build of Keycloak server to force re-initialization of the LDAP provider connection. If no connection pooling messages appear even after a server restart, it can indicate that connection pooling does not work with your LDAP server. When reporting an LDAP issue, consider attaching the part of your LDAP tree that contains the data causing the issue in your environment. For example, if the login of a particular user takes a long time, consider attaching that user's LDAP entry and noting the count of member attributes of the various "group" entries. In this case, it is also useful to mention whether those group entries are mapped to a Group LDAP mapper (or Role LDAP Mapper) in Red Hat build of Keycloak. 4.4. SSSD and FreeIPA Identity Management integration Red Hat build of Keycloak includes the System Security Services Daemon (SSSD) plugin. SSSD is part of Fedora and Red Hat Enterprise Linux (RHEL), and it provides access to multiple identity and authentication providers. SSSD also provides benefits such as failover and offline support. For more information, see the Red Hat Enterprise Linux Identity Management documentation . SSSD integrates with the FreeIPA identity management (IdM) server, providing authentication and access control. With this integration, Red Hat build of Keycloak can authenticate against privileged access management (PAM) services and retrieve user data from SSSD. For more information about using Red Hat Identity Management in Linux environments, see the Red Hat Enterprise Linux Identity Management documentation . Red Hat build of Keycloak and SSSD communicate through read-only D-Bus interfaces. For this reason, the way to provision and update users is to use the FreeIPA/IdM administration interface. By default, the interface imports the username, email, first name, and last name. Note Red Hat build of Keycloak registers groups and roles automatically but does not synchronize them. Any changes made by the Red Hat build of Keycloak administrator in Red Hat build of Keycloak do not synchronize with SSSD. 4.4.1. FreeIPA/IdM server The FreeIPA Container image is available at Quay.io . To set up the FreeIPA server, see the FreeIPA documentation . Procedure Run your FreeIPA server using this command: docker run --name freeipa-server-container -it \ -h server.freeipa.local -e PASSWORD=YOUR_PASSWORD \ -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ -v /var/lib/ipa-data:/data:Z freeipa/freeipa-server The parameter -h with server.freeipa.local represents the FreeIPA/IdM server hostname.
Change YOUR_PASSWORD to a password of your own. After the container starts, change the /etc/hosts file to include: x.x.x.x server.freeipa.local If you do not make this change, you must set up a DNS server. Use the following command to enroll your Linux server in the IPA domain so that the SSSD federation provider starts and runs on Red Hat build of Keycloak: ipa-client-install --mkhomedir -p admin -w password Run the following command on the client to verify the installation is working: kinit admin Enter your password. Add users to the IPA server using this command: USD ipa user-add <username> --first=<first name> --last=<surname> --email=<email address> --phone=<telephoneNumber> --street=<street> --city=<city> --state=<state> --postalcode=<postal code> --password Force-set the user's password using kinit. kinit <username> Enter the following to restore normal IPA operation: kdestroy -A kinit admin 4.4.2. SSSD and D-Bus The federation provider obtains the data from SSSD using D-Bus. It authenticates the data using PAM. Procedure Install the sssd-dbus RPM. USD sudo yum install sssd-dbus Run the following provisioning script: USD bin/federation-sssd-setup.sh The script can also be used as a guide to configure SSSD and PAM for Red Hat build of Keycloak. It makes the following changes to /etc/sssd/sssd.conf : [domain/your-hostname.local] ... ldap_user_extra_attrs = mail:mail, sn:sn, givenname:givenname, telephoneNumber:telephoneNumber ... [sssd] services = nss, sudo, pam, ssh, ifp ... [ifp] allowed_uids = root, yourOSUsername user_attributes = +mail, +telephoneNumber, +givenname, +sn The ifp service is added to SSSD and configured to allow the OS user to interrogate the IPA server through this interface. The script also creates a new PAM service /etc/pam.d/keycloak to authenticate users via SSSD: auth required pam_sss.so account required pam_sss.so Run dbus-send to ensure the setup is successful. dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:<username> array:string:mail,givenname,sn,telephoneNumber dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserGroups string:<username> If the setup is successful, each command displays the user's attributes and groups respectively. If there is a timeout or an error, the federation provider running on Red Hat build of Keycloak cannot retrieve any data. This error usually happens because the server is not enrolled in the FreeIPA IdM server, or does not have permission to access the SSSD service. If you do not have permission to access the SSSD service, ensure that the user running the Red Hat build of Keycloak server is listed in the following section of the /etc/sssd/sssd.conf file: [ifp] allowed_uids = root, yourOSUsername Also ensure that the ipaapi system user is created on the host. This user is necessary for the ifp service. Check that the user is created in the system: grep ipaapi /etc/passwd ipaapi:x:992:988:IPA Framework User:/:/sbin/nologin 4.4.3. Enabling the SSSD federation provider Red Hat build of Keycloak uses the DBus-Java project to communicate at a low level with D-Bus and JNA to authenticate via Operating System Pluggable Authentication Modules (PAM). Although Red Hat build of Keycloak now contains all the libraries needed to run the SSSD provider, JDK version 17 is required.
Therefore, the SSSD provider is displayed only when the host configuration is correct and JDK 17 is used to run Red Hat build of Keycloak. 4.4.4. Configuring a federated SSSD store After the installation, configure a federated SSSD store. Procedure Click User Federation in the menu. If everything is set up successfully, the Add Sssd providers button is displayed on the page. Click it. Assign a name to the new provider. Click Save . You can now authenticate against Red Hat build of Keycloak using a FreeIPA/IdM user and credentials. 4.5. Custom providers Red Hat build of Keycloak has a Service Provider Interface (SPI) for User Storage Federation to develop custom providers. You can find documentation on developing custom providers in the Server Developer Guide .
[ "Creating new LDAP Store for the LDAP storage provider:", "Mapper for provider: XXX, Mapper name: YYY, Provider: ZZZ", "docker run --name freeipa-server-container -it -h server.freeipa.local -e PASSWORD=YOUR_PASSWORD -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /var/lib/ipa-data:/data:Z freeipa/freeipa-server", "x.x.x.x server.freeipa.local", "ipa-client-install --mkhomedir -p admin -w password", "kinit admin", "ipa user-add <username> --first=<first name> --last=<surname> --email=<email address> --phone=<telephoneNumber> --street=<street> --city=<city> --state=<state> --postalcode=<postal code> --password", "kinit <username>", "kdestroy -A kinit admin", "sudo yum install sssd-dbus", "bin/federation-sssd-setup.sh", "[domain/your-hostname.local] ldap_user_extra_attrs = mail:mail, sn:sn, givenname:givenname, telephoneNumber:telephoneNumber [sssd] services = nss, sudo, pam, ssh, ifp [ifp] allowed_uids = root, yourOSUsername user_attributes = +mail, +telephoneNumber, +givenname, +sn", "auth required pam_sss.so account required pam_sss.so", "dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:<username> array:string:mail,givenname,sn,telephoneNumber dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserGroups string:<username>", "[ifp] allowed_uids = root, yourOSUsername", "grep ipaapi /etc/passwd ipaapi:x:992:988:IPA Framework User:/:/sbin/nologin" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/user-storage-federation
Chapter 6. Summarizing cluster specifications
Chapter 6. Summarizing cluster specifications 6.1. Summarizing cluster specifications by using a cluster version object You can obtain a summary of OpenShift Container Platform cluster specifications by querying the clusterversion resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Query cluster version, availability, uptime, and general status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8 Obtain a detailed summary of cluster specifications, update availability, and update history: USD oc describe clusterversion Example output Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion # ... Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8 # ...
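If you only need a single field for scripting, you can extract it with a JSONPath query. The following is a minimal sketch that assumes the default ClusterVersion object is named version , as shown in the example output above:
oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
The command prints only the version string, for example 4.13.8 , which is convenient in automation.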
[ "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8", "oc describe clusterversion", "Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/support/summarizing-cluster-specifications
13.9. Language Support
13.9. Language Support To install support for additional locales and language dialects, select Language Support from the Installation Summary screen. Use your mouse to select the language for which you would like to install support. In the left panel, select your language of choice, for example Español . Then you can select a locale specific to your region in the right panel, for example Español (Costa Rica) . You can select multiple languages and multiple locales. The selected languages are highlighted in bold in the left panel. Figure 13.6. Configuring Language Support Once you have made your selections, click Done to return to the Installation Summary screen. Note To change your language support configuration after you have completed the installation, visit the Region & Language section of the Settings dialog window.
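If you prefer the command line after installation, you can also change the system locale with the localectl utility. The following is a minimal sketch that assumes the Español (Costa Rica) locale, es_CR.UTF-8 , is available on the installed system:
localectl list-locales | grep -i '^es_'
localectl set-locale LANG=es_CR.UTF-8
The first command lists the Spanish locales that are installed; the second sets the system-wide default language.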
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-language-support-ppc
Chapter 10. FS-Cache
Chapter 10. FS-Cache FS-Cache is a persistent local cache that can be used by file systems to take data retrieved from over the network and cache it on local disk. This helps minimize network traffic for users accessing data from a file system mounted over the network (for example, NFS). The following diagram is a high-level illustration of how FS-Cache works: Figure 10.1. FS-Cache Overview FS-Cache is designed to be as transparent as possible to the users and administrators of a system. Unlike cachefs on Solaris, FS-Cache allows a file system on a server to interact directly with a client's local cache without creating an overmounted file system. With NFS, a mount option instructs the client to mount the NFS share with FS-cache enabled. FS-Cache does not alter the basic operation of a file system that works over the network - it merely provides that file system with a persistent place in which it can cache data. For instance, a client can still mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that won't fit into the cache (whether individually or collectively) as files can be partially cached and do not have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the client file system driver. To provide caching services, FS-Cache needs a cache back-end . A cache back-end is a storage driver configured to provide caching services (i.e. cachefiles ). In this case, FS-Cache requires a mounted block-based file system that supports bmap and extended attributes (e.g. ext3) as its cache back-end. FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared file system's driver must be altered to allow interaction with FS-Cache, data storage/retrieval, and metadata set up and validation. FS-Cache needs indexing keys and coherency data from the cached file system to support persistence: indexing keys to match file system objects to cache objects, and coherency data to determine whether the cache objects are still valid. Note Starting with Red Hat Enterprise Linux 6.2, cachefilesd is not installed by default and needs to be installed manually. 10.1. Performance Guarantee FS-Cache does not guarantee increased performance, however it ensures consistent performance by avoiding network congestion. Using a cache back-end incurs a performance penalty: for example, cached NFS shares add disk accesses to cross-network lookups. While FS-Cache tries to be as asynchronous as possible, there are synchronous paths (e.g. reads) where this isn't possible. For example, using FS-Cache to cache an NFS share between two computers over an otherwise unladen GigE network will not demonstrate any performance improvements on file access. Rather, NFS requests would be satisfied faster from server memory rather than from local disk. The use of FS-Cache, therefore, is a compromise between various factors. If FS-Cache is being used to cache NFS traffic, for instance, it may slow the client down a little, but massively reduce the network and server loading by satisfying read requests locally without consuming network bandwidth.
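As a minimal sketch of how this looks in practice, assuming an NFS share named server:/export and a local mount point /mnt/nfs (both placeholders), a Red Hat Enterprise Linux 6 client could enable FS-Cache for that share as follows:
yum install cachefilesd
service cachefilesd start
mount -t nfs -o fsc server:/export /mnt/nfs
The fsc mount option is what instructs the NFS client to use FS-Cache for the share; without it, the mount behaves as a normal, uncached NFS mount.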
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-fscache
Node APIs
Node APIs OpenShift Container Platform 4.18 Reference guide for node APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/node_apis/index
Appendix A. Versioning information
Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/versioning-information
5.9.4.2. SMB
5.9.4.2. SMB SMB stands for Server Message Block and is the name for the communications protocol used by various operating systems produced by Microsoft over the years. SMB makes it possible to share storage across a network. Present-day implementations often use TCP/IP as the underlying transport; previously NetBEUI was the transport. Red Hat Enterprise Linux supports SMB via the Samba server program. The System Administrators Guide includes information on configuring Samba.
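As an illustrative sketch only, and not a complete or hardened configuration, a minimal Samba share definition in /etc/samba/smb.conf could look like the following, assuming a directory /srv/export that should be exported read-only:
[global]
    workgroup = EXAMPLE
    security = user
[export]
    path = /srv/export
    read only = yes
    browseable = yes
After editing smb.conf, restart the smb service ( service smb restart ) for the change to take effect. Refer to the System Administrators Guide for the supported configuration details.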
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-net-smb
Chapter 4. User-provisioned infrastructure
Chapter 4. User-provisioned infrastructure 4.1. Preparing to install a cluster on AWS You prepare to install an OpenShift Container Platform cluster on AWS by completing the following steps: Verifying internet connectivity for your cluster. Configuring an AWS account . Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Preparing the user-provisioned infrastructure. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, manually creating long-term credentials for AWS or configuring an AWS cluster to use short-term credentials with Amazon Web Services Security Token Service (AWS STS). 4.1.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.1.2. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.1.3. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.1.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.1.5. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.2. Installation requirements for user-provisioned infrastructure on AWS Before you begin an installation on infrastructure that you provision, be sure that your AWS environment meets the following installation requirements. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 4.2.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 4.2.1.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.2.1.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 4.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 4.2.1.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 4.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 4.2.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 4.2.3. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 4.2.3.1. 
Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 
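The VPC endpoints described in Option 1 and Option 3 can be created with the AWS CLI. The following is a hedged sketch for the S3 gateway endpoint only, using placeholder VPC and route table IDs; note that the --service-name value follows the AWS CLI naming convention rather than the DNS names listed above, and that the EC2 and Elastic Load Balancing endpoints are interface endpoints that take subnet and security group IDs instead of route tables:
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.<aws_region>.s3 \
  --route-table-ids rtb-0123456789abcdef0
Attach the endpoint to the route tables of the subnets that the cluster uses, and repeat the process for the other required services.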
Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. 
Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 4.2.3.2. 
Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 4.2.4. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 4.3. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribePublicIpv4Pools (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:DisassociateAddress (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 4.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 4.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener elasticloadbalancing:SetSecurityGroups Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 4.6. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagInstanceProfile iam:TagRole Note If you specify an existing IAM role in the install-config.yaml file, the following IAM permissions are not required: iam:CreateRole , iam:DeleteRole , iam:DeleteRolePolicy , and iam:PutRolePolicy . If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 4.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 4.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketTagging s3:PutEncryptionConfiguration Example 4.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 4.10. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 4.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 4.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 4.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 4.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 4.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 4.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole Example 4.17. Required permissions for enabling Bring your own public IPv4 addresses (BYOIP) feature for installation ec2:DescribePublicIpv4Pools ec2:DisassociateAddress 4.2.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . 4.3. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. 
Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 4.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You prepared the user-provisioned infrastructure. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 4.3.2. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 4.3.2.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 4.3.2.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. 
If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 4.3.2.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . 
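After the cluster is installed, you can optionally verify the proxy configuration that results from these settings; this check assumes that the oc client is logged in to the cluster:
oc get proxy/cluster -o yaml
The httpProxy, httpsProxy, and noProxy values that you set in the install-config.yaml file appear in the spec of the Proxy object.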
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.3.2.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. 
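You can optionally confirm that the machine and machine set manifests were removed before you continue. This check uses the same <installation_directory> as the previous commands:
ls <installation_directory>/openshift/ | grep -E 'cluster-api|control-plane-machine-set'
If the command prints no file names, the manifests were removed.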
Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 4.3.3. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 4.3.4. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 4.3.4.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 4.18. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.5. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.
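Optional: You can validate the template syntax before you launch the stack; the <template>.yaml placeholder is the file that you saved in the previous step:
aws cloudformation validate-template --template-body file://<template>.yaml
If the template is valid, the command prints the parameter summary; otherwise, it reports the syntax error.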
Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 4.3.5.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 4.19. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.11" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.11" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 4.3.6. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
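Several of the parameter values that this template requires, such as VpcId and PrivateSubnetIds, are outputs of the VPC stack that you created earlier. You can retrieve them from that stack; the stack name cluster-vpc is an example and might differ in your environment:
aws cloudformation describe-stacks --stack-name cluster-vpc --query 'Stacks[0].Outputs' --output table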
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 4.3.6.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 4.20. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.7. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 4.3.8. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 4.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-018de6b1be470b7e6 ap-east-1 ami-092f09a4c8aacf56a ap-northeast-1 ami-001c9fc85b77505f4 ap-northeast-2 ami-0a5d7f1966d88fba3 ap-northeast-3 ami-085a2726ceb4f5eeb ap-south-1 ami-03f2c19071fb6eab3 ap-south-2 ami-0c82522890979d94b ap-southeast-1 ami-06e5b9d16ead6c55c ap-southeast-2 ami-0ccd429129fbf6bc0 ap-southeast-3 ami-096f9b4d41116d95c ap-southeast-4 ami-0b511ab55f9f7d052 ca-central-1 ami-0e357287c651be1f2 ca-west-1 ami-0b1692e5776740901 eu-central-1 ami-08b13fb388ba5610d eu-central-2 ami-04777298b55045ea9 eu-north-1 ami-05421993a4ee567b9 eu-south-1 ami-00959341869d34746 eu-south-2 ami-0a745b610522a87cc eu-west-1 ami-0e48f85ec57bd7baa eu-west-2 ami-0c015b1387acddc5e eu-west-3 ami-01e466af77a95b93d il-central-1 ami-0a1b706793b54d087 me-central-1 ami-09d58e8b2099ae5bc me-south-1 ami-0f9c259bc4346599c sa-east-1 ami-06277517d966881a3 us-east-1 ami-05483066c3caaccf5 us-east-2 ami-0c704ce516493a479 us-gov-east-1 ami-0695b2cbdd1ec4d48 us-gov-west-1 ami-004404a0eaa427b39 us-west-1 ami-0aaa56637063be772 us-west-2 ami-0870c7a3d5ef4d2b3 Table 4.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0d96d447d3df14f4e ap-east-1 ami-010236402dfba1717 ap-northeast-1 ami-08feeb6d9068bb12e ap-northeast-2 ami-0772082c119147205 ap-northeast-3 ami-05bcbb13493d0516f ap-south-1 ami-0034a539e8adcb817 ap-south-2 ami-0fc56b7c53c38ff70 ap-southeast-1 ami-0b50a0e92a58f3ad8 ap-southeast-2 ami-0697a9174ae206182 ap-southeast-3 ami-05ae309319a7ad504 ap-southeast-4 ami-00500414843baa657 ca-central-1 ami-05e76bf99d3b0d4c8 ca-west-1 ami-099fc953c221ec590 eu-central-1 ami-0d2e2698884020b71 eu-central-2 ami-0a96ba21ddee01719 eu-north-1 ami-0ab8a1070d0b1d9a5 eu-south-1 ami-0d0465299a8b196c1 eu-south-2 ami-07d5aa3c6d468c738 eu-west-1 ami-08e7d209f26c09c80 eu-west-2 ami-0c8336d4e2c1f13cb eu-west-3 ami-085d804b98acf0648 il-central-1 ami-02ce030e36aeb7643 me-central-1 ami-0dea3ac466040b77f me-south-1 ami-02ef2a46887b9786d sa-east-1 ami-0d19817d31b2399f9 us-east-1 ami-04859c888566a2c2f us-east-2 ami-02f7e4a5dc66361bb us-gov-east-1 ami-067ef782980ea96ad us-gov-west-1 ami-0ce67540c1bb2319d us-west-1 ami-0a09c5d2f287718a3 us-west-2 ami-06b94e64e595f6cee 4.3.8.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file. 4.3.8.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. 
You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.16.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 4.3.9. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. 
Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. 
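Optional: If you must reference the Ignition config file through a presigned URL rather than the s3:// schema, you can generate one with the AWS CLI. The following is a minimal sketch that assumes the bucket and object key created in the previous steps; choose an expiry long enough for the bootstrap node to fetch the file:

$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600

The command prints an HTTPS URL that you can supply as the BootstrapIgnitionLocation parameter value in place of the s3:// form.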
Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. 
Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 4.3.9.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 4.21. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. 
Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: 
Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 4.3.10. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. 
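Tip: Several of the parameter values in the following procedure come from the Outputs of the CloudFormation stacks that you created earlier, such as the stacks for the VPC, for DNS and load balancing, and for the security groups and roles. As a convenience, you can list the outputs of any stack with the AWS CLI; this is a sketch, with <stack_name> standing in for the name of the relevant stack:

$ aws cloudformation describe-stacks --stack-name <stack_name> --query 'Stacks[0].Outputs' --output table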
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. 
Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 4.3.10.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 4.22. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration 
Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.11. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. 
You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to start the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the bootstrap Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the networking objects and load balancers that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 4.3.11.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 4.23. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.12. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. 
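When the output states that it is now safe to remove the bootstrap resources, you can delete the bootstrap CloudFormation stack. For example, a sketch assuming the stack name cluster-bootstrap that you chose when you created the bootstrap node:

$ aws cloudformation delete-stack --stack-name cluster-bootstrap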
Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 4.3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.3.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.3.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 4.3.15.1. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 4.3.15.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. 
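Both of these preparation steps can be performed with the AWS CLI. The commands below are one possible sketch; the bucket name and region are placeholders, and your organization might require additional settings such as encryption or a public access block:
# Create the bucket. The LocationConstraint is required outside us-east-1.
aws s3api create-bucket --bucket <bucket-name> --region <region-name> \
    --create-bucket-configuration LocationConstraint=<region-name>

# Abort incomplete multipart uploads that are one day old.
aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> \
    --lifecycle-configuration '{
      "Rules": [
        {
          "ID": "abort-incomplete-multipart-uploads",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
        }
      ]
    }'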
Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 4.3.15.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.3.16. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 4.3.17. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. 
To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 
4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 4.3.18. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.3.19. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. 
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.3.20. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 4.3.21. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 4.4. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 4.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You prepared the user-provisioned infrastructure. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 4.4.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 4.4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.4.3. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 4.4.3.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. 
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 4.4.3.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- Add the image content resources: imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 4.4.3.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.4.3.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. 
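If you prefer to script this change instead of editing the manifest by hand, something like the following works. This is an optional sketch and assumes the yq (v4) command-line YAML processor is installed; the manual edit described above is equally valid:
# Remove the privateZone and publicZone sections so that the Ingress Operator
# does not create DNS records; you must then add the ingress records manually.
yq -i 'del(.spec.privateZone) | del(.spec.publicZone)' \
    <installation_directory>/manifests/cluster-dns-02-config.yml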
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually creating long-term credentials 4.4.4. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 4.4.5. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 4.4.5.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 4.24. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 4.4.6. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. 
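The parameter file in the next procedure consumes the VpcId, PublicSubnetIds, and PrivateSubnetIds outputs of the VPC stack. If you want to read those outputs from the command line instead of the CloudFormation console, queries such as the following work; this is a sketch that assumes the example stack name cluster-vpc used earlier:
# Print all outputs of the VPC stack as key/value pairs.
aws cloudformation describe-stacks --stack-name cluster-vpc \
    --query 'Stacks[0].Outputs[].[OutputKey,OutputValue]' --output text

# Read a single output, for example the VPC ID.
aws cloudformation describe-stacks --stack-name cluster-vpc \
    --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text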
Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which as a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 4.4.6.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 4.25. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.11" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.11" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 4.4.7. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
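Tip The parameter files that you create in this procedure and in the procedures that follow ask for values such as VpcId , PrivateSubnetIds , and PublicSubnetIds , which are emitted as outputs of the CloudFormation stacks that you created earlier. Instead of copying these values from the AWS console, you can query them with the AWS CLI. The following minimal sketch assumes a hypothetical VPC stack name of cluster-vpc ; substitute your own stack name and the output keys that you need:
#!/bin/bash
# Sketch only: read output values from a previously created CloudFormation stack.
# The stack name "cluster-vpc" and the variable names are examples, not required values.
VPC_STACK="cluster-vpc"

# Query a single output value by its OutputKey.
VPC_ID=$(aws cloudformation describe-stacks \
  --stack-name "${VPC_STACK}" \
  --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" \
  --output text)

PRIVATE_SUBNETS=$(aws cloudformation describe-stacks \
  --stack-name "${VPC_STACK}" \
  --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnetIds'].OutputValue" \
  --output text)

echo "VpcId: ${VPC_ID}"
echo "PrivateSubnetIds: ${PRIVATE_SUBNETS}"
You can paste the printed values into the parameter JSON files that this procedure and the later procedures create.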
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 4.4.7.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 4.26. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
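  # The master policy above grants the Amazon EC2, Elastic Load Balancing, and
  # kms:DescribeKey permissions that the control plane nodes use for the AWS cloud
  # provider integration, for example to manage EBS volumes and service load balancers.
  # The worker role that follows is intentionally minimal. The exact set of required
  # actions can vary between OpenShift Container Platform versions, so verify it against
  # the documentation for your version before you change it.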
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile 4.4.8. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 4.4.9. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 4.5. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-018de6b1be470b7e6 ap-east-1 ami-092f09a4c8aacf56a ap-northeast-1 ami-001c9fc85b77505f4 ap-northeast-2 ami-0a5d7f1966d88fba3 ap-northeast-3 ami-085a2726ceb4f5eeb ap-south-1 ami-03f2c19071fb6eab3 ap-south-2 ami-0c82522890979d94b ap-southeast-1 ami-06e5b9d16ead6c55c ap-southeast-2 ami-0ccd429129fbf6bc0 ap-southeast-3 ami-096f9b4d41116d95c ap-southeast-4 ami-0b511ab55f9f7d052 ca-central-1 ami-0e357287c651be1f2 ca-west-1 ami-0b1692e5776740901 eu-central-1 ami-08b13fb388ba5610d eu-central-2 ami-04777298b55045ea9 eu-north-1 ami-05421993a4ee567b9 eu-south-1 ami-00959341869d34746 eu-south-2 ami-0a745b610522a87cc eu-west-1 ami-0e48f85ec57bd7baa eu-west-2 ami-0c015b1387acddc5e eu-west-3 ami-01e466af77a95b93d il-central-1 ami-0a1b706793b54d087 me-central-1 ami-09d58e8b2099ae5bc me-south-1 ami-0f9c259bc4346599c sa-east-1 ami-06277517d966881a3 us-east-1 ami-05483066c3caaccf5 us-east-2 ami-0c704ce516493a479 us-gov-east-1 ami-0695b2cbdd1ec4d48 us-gov-west-1 ami-004404a0eaa427b39 us-west-1 ami-0aaa56637063be772 us-west-2 ami-0870c7a3d5ef4d2b3 Table 4.6. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0d96d447d3df14f4e ap-east-1 ami-010236402dfba1717 ap-northeast-1 ami-08feeb6d9068bb12e ap-northeast-2 ami-0772082c119147205 ap-northeast-3 ami-05bcbb13493d0516f ap-south-1 ami-0034a539e8adcb817 ap-south-2 ami-0fc56b7c53c38ff70 ap-southeast-1 ami-0b50a0e92a58f3ad8 ap-southeast-2 ami-0697a9174ae206182 ap-southeast-3 ami-05ae309319a7ad504 ap-southeast-4 ami-00500414843baa657 ca-central-1 ami-05e76bf99d3b0d4c8 ca-west-1 ami-099fc953c221ec590 eu-central-1 ami-0d2e2698884020b71 eu-central-2 ami-0a96ba21ddee01719 eu-north-1 ami-0ab8a1070d0b1d9a5 eu-south-1 ami-0d0465299a8b196c1 eu-south-2 ami-07d5aa3c6d468c738 eu-west-1 ami-08e7d209f26c09c80 eu-west-2 ami-0c8336d4e2c1f13cb eu-west-3 ami-085d804b98acf0648 il-central-1 ami-02ce030e36aeb7643 me-central-1 ami-0dea3ac466040b77f me-south-1 ami-02ef2a46887b9786d sa-east-1 ami-0d19817d31b2399f9 us-east-1 ami-04859c888566a2c2f us-east-2 ami-02f7e4a5dc66361bb us-gov-east-1 ami-067ef782980ea96ad us-gov-west-1 ami-0ce67540c1bb2319d us-west-1 ami-0a09c5d2f287718a3 us-west-2 ami-06b94e64e595f6cee 4.4.10. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 
6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules). 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC that the created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 The location to fetch the bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the Ignition config in the template to add the ignition.config.proxy fields. Additionally, if you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 4.4.10.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 4.27. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 4.4.11. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": 
"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 4.4.11.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 4.28. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: 
Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] 4.4.12. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. 
You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to launch the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 The base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
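Looking up each value for the parameters file by hand is error prone. Before you launch the stack, you can optionally generate the file from the outputs of the stacks that you created earlier. The following is a minimal sketch only, not part of the official procedure; the stack name cluster-sg and the file name worker-params.json are assumptions for illustration, and the <...> values are placeholders that you must replace. Review the generated file before you use it.

#!/usr/bin/env bash
# Sketch: build the worker parameters file from earlier stack outputs.
# "cluster-sg" and "worker-params.json" are assumed names; replace them with your own.
set -euo pipefail

stack_output() {  # stack_output <stack_name> <output_key>
  aws cloudformation describe-stacks --stack-name "$1" \
    --query "Stacks[0].Outputs[?OutputKey=='$2'].OutputValue" --output text
}

INFRA_NAME=$(jq -r .infraID <installation_directory>/metadata.json)   # infrastructure name
RHCOS_AMI="ami-<random_string>"                                       # RHCOS AMI for your region and architecture
SUBNET="subnet-<random_string>"                                       # a private subnet ID
WORKER_SG=$(stack_output cluster-sg WorkerSecurityGroupId)            # security group and roles stack output
WORKER_PROFILE=$(stack_output cluster-sg WorkerInstanceProfile)       # security group and roles stack output
# The CA value is the long data:text/plain;charset=utf-8;base64,... string from worker.ign.
CA_STRING=$(jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/worker.ign)

cat > worker-params.json <<EOF
[
  { "ParameterKey": "InfrastructureName", "ParameterValue": "${INFRA_NAME}" },
  { "ParameterKey": "RhcosAmi", "ParameterValue": "${RHCOS_AMI}" },
  { "ParameterKey": "Subnet", "ParameterValue": "${SUBNET}" },
  { "ParameterKey": "WorkerSecurityGroupId", "ParameterValue": "${WORKER_SG}" },
  { "ParameterKey": "IgnitionLocation", "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" },
  { "ParameterKey": "CertificateAuthorities", "ParameterValue": "${CA_STRING}" },
  { "ParameterKey": "WorkerInstanceProfileName", "ParameterValue": "${WORKER_PROFILE}" },
  { "ParameterKey": "WorkerInstanceType", "ParameterValue": "m6i.large" }
]
EOF

Because every worker stack uses the same template and parameter values, you can reuse this file for each additional worker stack and change only the stack name on the create-stack command.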
Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 4.4.12.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 4.29. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp 4.4.13. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. 
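If the wait-for bootstrap-complete command fails or times out, collect diagnostics before you change anything. The following is a minimal optional sketch, not part of the official procedure, that wraps the wait command and gathers bootstrap logs on failure; <installation_directory>, <bootstrap_address>, and <master_address> are placeholders that you must replace with the values for your cluster.

#!/usr/bin/env bash
# Sketch: wait for bootstrapping and gather diagnostics if it fails.
# All <...> values are placeholders for your environment.
set -uo pipefail

if ! ./openshift-install wait-for bootstrap-complete \
    --dir <installation_directory> --log-level=info; then
  # Collect bootstrap and control plane logs to troubleshoot or attach to a support case.
  ./openshift-install gather bootstrap \
    --dir <installation_directory> \
    --bootstrap <bootstrap_address> \
    --master <master_address>
  exit 1
fi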
Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. 4.4.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.4.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 4.4.15.1. 
Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.4.15.2. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.4.15.2.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 4.4.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.4.16. 
Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 4.4.17. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . 
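The change-resource-record-sets commands in the next steps reuse several of the values that you just gathered. The following is a minimal optional sketch, not part of the official procedure, that captures those values in shell variables so that you can substitute them consistently; the cluster-dns stack name is an assumption for illustration, and <domain_name> is a placeholder from this procedure.

#!/usr/bin/env bash
# Sketch: collect the values needed for the Ingress alias records.
# "cluster-dns" is an assumed stack name and <domain_name> is a placeholder; replace them.
set -euo pipefail

# External hostname of the Ingress Operator load balancer (the EXTERNAL-IP column).
EXTERNAL_IP=$(oc -n openshift-ingress get service router-default \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Hosted zone ID of that load balancer.
LB_ZONE_ID=$(aws elb describe-load-balancers | jq -r --arg dns "${EXTERNAL_IP}" \
  '.LoadBalancerDescriptions[] | select(.DNSName == $dns).CanonicalHostedZoneNameID')

# Public hosted zone ID for the cluster domain, without the /hostedzone/ prefix.
PUBLIC_ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" \
  --query 'HostedZones[?Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \
  --output text | sed 's|^/hostedzone/||')

# Private hosted zone ID from the DNS and load balancing stack output.
PRIVATE_ZONE_ID=$(aws cloudformation describe-stacks --stack-name cluster-dns \
  --query "Stacks[0].Outputs[?OutputKey=='PrivateHostedZoneId'].OutputValue" --output text)

echo "EXTERNAL_IP=${EXTERNAL_IP}"
echo "LB_ZONE_ID=${LB_ZONE_ID}"
echo "PUBLIC_ZONE_ID=${PUBLIC_ZONE_ID}"
echo "PRIVATE_ZONE_ID=${PRIVATE_ZONE_ID}"

In the commands that follow, use PRIVATE_ZONE_ID and PUBLIC_ZONE_ID for the --hosted-zone-id values, LB_ZONE_ID for the AliasTarget HostedZoneId, and EXTERNAL_IP, with a trailing period, for the DNSName.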
Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 4.4.18. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Register your cluster on the Cluster registration page. 4.4.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.4.20. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. Additional resources See About remote health monitoring for more information about the Telemetry service 4.4.21. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 4.4.22. steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . 
If necessary, see Registering your disconnected cluster . If necessary, you can remove cloud provider credentials .
[ "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.11\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.11\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
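The subnet-tagging custom resources above only report success or failure back through cfnresponse, so it can be useful to confirm the tags independently once the stack completes. The following is a minimal sketch, not part of the installation procedure, that uses the same boto3 EC2 client as the Lambda function to check that each subnet carries the kubernetes.io/cluster/<infrastructure_name> tag with the value shared; the infrastructure name and subnet IDs are placeholders you would substitute.

import boto3

# Placeholder values; substitute the real infrastructure name and subnet IDs.
infrastructure_name = "mycluster-<random_string>"
subnet_ids = ["subnet-<random_string>"]

ec2 = boto3.client("ec2")
tags = ec2.describe_tags(
    Filters=[
        {"Name": "resource-id", "Values": subnet_ids},
        {"Name": "key", "Values": ["kubernetes.io/cluster/" + infrastructure_name]},
    ]
)
for tag in tags["Tags"]:
    # Expect Value == "shared" on every subnet tagged by the custom resource.
    print(tag["ResourceId"], tag["Key"], tag["Value"])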
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": 
\"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
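The bootstrap instance's UserData is a small Ignition "replace" stub that points the node at the bootstrap.ign object in S3. Purely as an illustration of what Fn::Base64 and !Sub produce in that template, the sketch below builds the same JSON and base64-encodes it, with the S3 location left as a placeholder.

import base64
import json

# Placeholder; the template substitutes the BootstrapIgnitionLocation parameter here.
s3_location = "s3://<bucket_name>/bootstrap.ign"

ignition_stub = {
    "ignition": {
        "config": {"replace": {"source": s3_location}},
        "version": "3.1.0",
    }
}

# Equivalent of Fn::Base64 applied to the substituted UserData string in the template.
user_data = base64.b64encode(json.dumps(ignition_stub).encode()).decode()
print(user_data)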
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. 
Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. 
Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True 
False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
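The change-resource-record-sets calls above create an alias A record for *.apps.<cluster_domain> in the private zone and, for public clusters, in the public zone. A rough boto3 equivalent for one zone, with every identifier left as a placeholder taken from the CLI example, is shown below.

import boto3

route53 = boto3.client("route53")

# All values are placeholders from the CLI example above.
route53.change_resource_record_sets(
    HostedZoneId="<private_hosted_zone_id>",
    ChangeBatch={
        "Changes": [
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": r"\052.apps.<cluster_domain>",  # \052 is the octal escape for *
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "<hosted_zone_id>",  # hosted zone ID of the load balancer
                        "DNSName": "<external_ip>.",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)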
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
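The VpcId, PublicSubnetIds, and PrivateSubnetIds outputs of the VPC stack feed the parameter files for the later stacks. As an optional convenience, the sketch below reads them with boto3 instead of copying them from the describe-stacks output; the stack name is a placeholder.

import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder stack name; use the name given to `aws cloudformation create-stack`.
stack = cloudformation.describe_stacks(StackName="<name>")["Stacks"][0]
outputs = {output["OutputKey"]: output["OutputValue"] for output in stack["Outputs"]}

print(outputs["VpcId"])
print(outputs["PublicSubnetIds"])   # comma-separated list, one subnet per availability zone
print(outputs["PrivateSubnetIds"])  # comma-separated list, one subnet per availability zone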
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.11\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.11\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. 
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: 
\"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. 
Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. 
Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - 
'{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister 
Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 
40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_aws/user-provisioned-infrastructure
OperatorHub APIs
OperatorHub APIs OpenShift Container Platform 4.18 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/operatorhub_apis/index
Chapter 12. Performing basic overcloud administration tasks
Chapter 12. Performing basic overcloud administration tasks This chapter contains information about basic tasks you might need to perform during the lifecycle of your overcloud. 12.1. Accessing overcloud nodes through SSH You can access each overcloud node through the SSH protocol. Each overcloud node contains a heat-admin user. The stack user on the undercloud has key-based SSH access to the heat-admin user on each overcloud node. All overcloud nodes have a short hostname that the undercloud resolves to an IP address on the control plane network. Each short hostname uses a .ctlplane suffix. For example, the short name for overcloud-controller-0 is overcloud-controller-0.ctlplane. Prerequisites A deployed overcloud with a working control plane network. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Find the name of the node that you want to access: Connect to the node as the heat-admin user and use the short hostname of the node: 12.2. Managing containerized services Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services. Listing containers and images To list running containers, run the following command: To include stopped or failed containers in the command output, add the --all option to the command: To list container images, run the following command: Inspecting container properties To view the properties of a container or container images, use the podman inspect command. For example, to inspect the keystone container, run the following command: Managing containers with Systemd services Previous versions of OpenStack Platform managed containers with Docker and its daemon. In OpenStack Platform 16, the Systemd services interface manages the lifecycle of the containers. Each container is a service and you run Systemd commands to perform specific operations for each container. Note It is not recommended to use the Podman CLI to stop, start, and restart containers because Systemd applies a restart policy. Use Systemd service commands instead. To check a container status, run the systemctl status command: To stop a container, run the systemctl stop command: To start a container, run the systemctl start command: To restart a container, run the systemctl restart command: Because no daemon monitors the container status, Systemd automatically restarts most containers in these situations: Clean exit code or signal, such as running the podman stop command. Unclean exit code, such as the podman container crashing after a start. Unclean signals. Timeout if the container takes more than 1m 30s to start. For more information about Systemd services, see the systemd.service documentation . Note Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/ . For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any changes that were made within the container before the restart. 
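As a brief illustration of this Systemd-based workflow, the following sketch uses the keystone container's tripleo_keystone unit, which also appears in the command listing at the end of this chapter; substitute the unit name for whichever container you need to manage:
# check the current state of the Systemd service that wraps the container
sudo systemctl status tripleo_keystone
# stop and start the container through Systemd rather than the Podman CLI
sudo systemctl stop tripleo_keystone
sudo systemctl start tripleo_keystone
# or restart it in a single step
sudo systemctl restart tripleo_keystone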
Monitoring podman containers with Systemd timers The Systemd timers interface manages container health checks. Each container has a timer that runs a service unit that executes health check scripts. To list all OpenStack Platform container timers, run the systemctl list-timers command and limit the output to lines containing tripleo : To check the status of a specific container timer, run the systemctl status command for the healthcheck service: To stop, start, restart, and show the status of a container timer, run the relevant systemctl command against the .timer Systemd resource. For example, to check the status of the tripleo_keystone_healthcheck.timer resource, run the following command: If the healthcheck service is disabled but the timer for that service is present and enabled, it means that the check is currently timed out, but will be run according to the timer. You can also start the check manually. Note The podman ps command does not show the container health status. Checking container logs OpenStack Platform 16 introduces a new logging directory /var/log/containers/stdout that contains the standard output (stdout) of all of the containers, and standard errors (stderr), consolidated in a single file for each container. Paunch and the container-puppet.py script configure podman containers to push their outputs to the /var/log/containers/stdout directory, which creates a collection of all logs, even for the deleted containers, such as container-puppet-* containers. The host also applies log rotation to this directory, which prevents huge files and disk space issues. If a container is replaced, the new container outputs to the same log file, because podman uses the container name instead of the container ID. You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command: Accessing containers To enter the shell for a containerized service, use the podman exec command to launch /bin/bash . For example, to enter the shell for the keystone container, run the following command: To enter the shell for the keystone container as the root user, run the following command: To exit the container, run the following command: 12.3. Modifying the overcloud environment You can modify the overcloud to add additional features or alter existing operations. To modify the overcloud, make modifications to your custom environment files and heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 7.14, "Deployment command" , rerun the following command: Director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. Director does not recreate the overcloud, but rather changes the existing overcloud. Important Removing parameters from custom environment files does not revert the parameter value to the default configuration. You must identify the default value from the core heat template collection in /usr/share/openstack-tripleo-heat-templates and set the value in your custom environment file manually. If you want to include a new environment file, add it to the openstack overcloud deploy command with the -e option. For example: This command includes the new parameters and resources from the environment file into the stack. 
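To make the -e workflow concrete, the following minimal sketch reruns a deployment with one additional environment file. The file name my-new-feature.yaml is hypothetical and used only for illustration; in practice you must repeat every environment file and option from your original deployment command:
source ~/stackrc
# rerun the original deployment command and append the new environment file with -e
openstack overcloud deploy --templates \
  -e ~/templates/node-info.yaml \
  -e ~/templates/my-new-feature.yaml \
  --ntp-server pool.ntp.org
# my-new-feature.yaml is a placeholder; include all environment files from the initial deployment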
Important It is not advisable to make manual modifications to the overcloud configuration because director might overwrite these modifications later. 12.4. Importing virtual machines into the overcloud You can migrate virtual machines from an existing OpenStack environment to your Red Hat OpenStack Platform (RHOSP) environment. Procedure On the existing OpenStack environment, create a new image by taking a snapshot of a running server and download the image: Copy the exported image to the undercloud node: Log in to the undercloud as the stack user. Source the overcloudrc file: Upload the exported image into the overcloud: Launch a new instance: Important These commands copy each virtual machine disk from the existing OpenStack environment to the new Red Hat OpenStack Platform. QCOW snapshots lose their original layering system. This process migrates all instances from a Compute node. You can now perform maintenance on the node without any instance downtime. To return the Compute node to an enabled state, run the following command: 12.5. Running the dynamic inventory script Director can run Ansible-based automation in your Red Hat OpenStack Platform (RHOSP) environment. Director uses the tripleo-ansible-inventory command to generate a dynamic inventory of nodes in your environment. Procedure To view a dynamic inventory of nodes, run the tripleo-ansible-inventory command after sourcing stackrc : Use the --list option to return details about all hosts. This command outputs the dynamic inventory in JSON format: To execute Ansible playbooks on your environment, run the ansible command and include the full path of the dynamic inventory tool using the -i option. For example: Replace [HOSTS] with the type of hosts that you want to use: controller for all Controller nodes compute for all Compute nodes overcloud for all overcloud child nodes. For example, controller and compute nodes undercloud for the undercloud "*" for all nodes Replace [OTHER OPTIONS] with additional Ansible options. Use the --ssh-extra-args='-o StrictHostKeyChecking=no' option to bypass confirmation on host key checking. Use the -u [USER] option to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter. Use the -m [MODULE] option to use a specific Ansible module. The default is command , which executes Linux commands. Use the -a [MODULE_ARGS] option to define arguments for the chosen module. Important Custom Ansible automation on the overcloud is not part of the standard overcloud stack. Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes. 12.6. Removing the overcloud To remove the overcloud, run the openstack overcloud delete command. Procedure Delete an existing overcloud: Confirm that the overcloud is no longer present in the output of the openstack stack list command: Deletion takes a few minutes. When the deletion completes, follow the standard steps in the deployment scenarios to recreate your overcloud.
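Returning to the dynamic inventory usage described in Section 12.5, a hypothetical ad hoc run against the Compute nodes might look like the following sketch; the ping module and the extra SSH argument are illustrative choices, not requirements of this guide:
source ~/stackrc
# run the Ansible ping module against all Compute nodes using the dynamic inventory
ansible compute -i /bin/tripleo-ansible-inventory -m ping --ssh-extra-args='-o StrictHostKeyChecking=no'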
[ "source ~/stackrc", "(undercloud) USD openstack server list", "(undercloud) USD ssh [email protected]", "sudo podman ps", "sudo podman ps --all", "sudo podman images", "sudo podman inspect keystone", "sudo systemctl status tripleo_keystone ● tripleo_keystone.service - keystone container Loaded: loaded (/etc/systemd/system/tripleo_keystone.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2019-02-15 23:53:18 UTC; 2 days ago Main PID: 29012 (podman) CGroup: /system.slice/tripleo_keystone.service └─29012 /usr/bin/podman start -a keystone", "sudo systemctl stop tripleo_keystone", "sudo systemctl start tripleo_keystone", "sudo systemctl restart tripleo_keystone", "sudo systemctl list-timers | grep tripleo Mon 2019-02-18 20:18:30 UTC 1s left Mon 2019-02-18 20:17:26 UTC 1min 2s ago tripleo_nova_metadata_healthcheck.timer tripleo_nova_metadata_healthcheck.service Mon 2019-02-18 20:18:33 UTC 4s left Mon 2019-02-18 20:17:03 UTC 1min 25s ago tripleo_mistral_engine_healthcheck.timer tripleo_mistral_engine_healthcheck.service Mon 2019-02-18 20:18:34 UTC 5s left Mon 2019-02-18 20:17:23 UTC 1min 5s ago tripleo_keystone_healthcheck.timer tripleo_keystone_healthcheck.service Mon 2019-02-18 20:18:35 UTC 6s left Mon 2019-02-18 20:17:13 UTC 1min 15s ago tripleo_memcached_healthcheck.timer tripleo_memcached_healthcheck.service (...)", "sudo systemctl status tripleo_keystone_healthcheck.service ● tripleo_keystone_healthcheck.service - keystone healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.service; disabled; vendor preset: disabled) Active: inactive (dead) since Mon 2019-02-18 20:22:46 UTC; 22s ago Process: 115581 ExecStart=/usr/bin/podman exec keystone /openstack/healthcheck (code=exited, status=0/SUCCESS) Main PID: 115581 (code=exited, status=0/SUCCESS) Feb 18 20:22:46 undercloud.localdomain systemd[1]: Starting keystone healthcheck Feb 18 20:22:46 undercloud.localdomain podman[115581]: {\"versions\": {\"values\": [{\"status\": \"stable\", \"updated\": \"2019-01-22T00:00:00Z\", \"...\"}]}]}} Feb 18 20:22:46 undercloud.localdomain podman[115581]: 300 192.168.24.1:35357 0.012 seconds Feb 18 20:22:46 undercloud.localdomain systemd[1]: Started keystone healthcheck.", "sudo systemctl status tripleo_keystone_healthcheck.timer ● tripleo_keystone_healthcheck.timer - keystone container healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.timer; enabled; vendor preset: disabled) Active: active (waiting) since Fri 2019-02-15 23:53:18 UTC; 2 days ago", "sudo podman logs keystone", "sudo podman exec -it keystone /bin/bash", "sudo podman exec --user 0 -it <NAME OR ID> /bin/bash", "exit", "source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/node-info.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --ntp-server pool.ntp.org", "source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/new-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml -e ~/templates/node-info.yaml --ntp-server pool.ntp.org", "openstack server image create instance_name --name image_name openstack image save image_name --file exported_vm.qcow2", "scp exported_vm.qcow2 [email protected]:~/.", "source ~/overcloudrc", "(overcloud) USD openstack image create 
imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare", "(overcloud) USD openstack server create imported_instance --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id", "source ~/overcloudrc (overcloud) USD openstack compute service set [hostname] nova-compute --enable", "source ~/stackrc (undercloud) USD tripleo-ansible-inventory --list", "{\"overcloud\": {\"children\": [\"controller\", \"compute\"], \"vars\": {\"ansible_ssh_user\": \"heat-admin\"}}, \"controller\": [\"192.168.24.2\"], \"undercloud\": {\"hosts\": [\"localhost\"], \"vars\": {\"overcloud_horizon_url\": \"http://192.168.24.4:80/dashboard\", \"overcloud_admin_password\": \"abcdefghijklm12345678\", \"ansible_connection\": \"local\"}}, \"compute\": [\"192.168.24.3\"]}", "(undercloud) USD ansible [HOSTS] -i /bin/tripleo-ansible-inventory [OTHER OPTIONS]", "source ~/stackrc (undercloud) USD openstack overcloud delete overcloud", "(undercloud) USD openstack stack list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_performing-basic-overcloud-administration-tasks
Chapter 10. Using Streams for Apache Kafka with Kafka Connect
Chapter 10. Using Streams for Apache Kafka with Kafka Connect Use Kafka Connect to stream data between Kafka and external systems. Kafka Connect provides a framework for moving large amounts of data while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with database, storage, and messaging systems that are external to your Kafka cluster. Kafka Connect runs in standalone or distributed modes. Standalone mode In standalone mode, Kafka Connect runs on a single node. Standalone mode is intended for development and testing. Distributed mode In distributed mode, Kafka Connect runs across one or more worker nodes and the workloads are distributed among them. Distributed mode is intended for production. Kafka Connect uses connector plugins that implement connectivity for different types of external systems. There are two types of connector plugins: sink and source. Sink connectors stream data from Kafka to external systems. Source connectors stream data from external systems into Kafka. You can also use the Kafka Connect REST API to create, manage, and monitor connector instances. Connector configuration specifies details such as the source or sink connectors and the Kafka topics to read from or write to. How you manage the configuration depends on whether you are running Kafka Connect in standalone or distributed mode. In standalone mode, you can provide the connector configuration as JSON through the Kafka Connect REST API or you can use properties files to define the configuration. In distributed mode, you can only provide the connector configuration as JSON through the Kafka Connect REST API. Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages . 10.1. Using Kafka Connect in standalone mode In Kafka Connect standalone mode, connectors run on the same node as the Kafka Connect worker process, which runs as a single process in a single JVM. This means that the worker process and connectors share the same resources, such as CPU, memory, and disk. 10.1.1. Configuring Kafka Connect in standalone mode To configure Kafka Connect in standalone mode, edit the config/connect-standalone.properties configuration file. The following options are the most important. bootstrap.servers A list of Kafka broker addresses used as bootstrap connections to Kafka. For example, kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 . key.converter The class used to convert message keys to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . offset.storage.file.filename Specifies the file in which the offset data is stored. Connector plugins open client connections to the Kafka brokers using the bootstrap address. To configure these connections, use the standard Kafka producer and consumer configuration options prefixed by producer. or consumer. . 10.1.2. Running Kafka Connect in standalone mode Configure and run Kafka Connect in standalone mode. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. You have specified connector configuration in properties files. You can also use the Kafka Connect REST API to manage connectors . 
Procedure Edit the ./config/connect-standalone.properties Kafka Connect configuration file and set bootstrap.servers to point to your Kafka brokers. For example: bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 Start Kafka Connect with the configuration file and specify one or more connector configurations. ./bin/connect-standalone.sh ./config/connect-standalone.properties connector1.properties [connector2.properties ...] Verify that Kafka Connect is running. jcmd | grep ConnectStandalone 10.2. Using Kafka Connect in distributed mode In distributed mode, Kafka Connect runs as a cluster of worker processes, with each worker running on a separate node. Connectors can run on any worker in the cluster, allowing for greater scalability and fault tolerance. The connectors are managed by the workers, which coordinate with each other to distribute the work and ensure that each connector is running on a single node at any given time. 10.2.1. Configuring Kafka Connect in distributed mode To configure Kafka Connect in distributed mode, edit the config/connect-distributed.properties configuration file. The following options are the most important. bootstrap.servers A list of Kafka broker addresses used as bootstrap connections to Kafka. For example, kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 . key.converter The class used to convert message keys to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . group.id The name of the distributed Kafka Connect cluster. This must be unique and must not conflict with another consumer group ID. The default value is connect-cluster . config.storage.topic The Kafka topic used to store connector configurations. The default value is connect-configs . offset.storage.topic The Kafka topic used to store offsets. The default value is connect-offset . status.storage.topic The Kafka topic used for worker node statuses. The default value is connect-status . Streams for Apache Kafka includes an example configuration file for Kafka Connect in distributed mode - see config/connect-distributed.properties in the Streams for Apache Kafka installation directory. Connector plugins open client connections to the Kafka brokers using the bootstrap address. To configure these connections, use the standard Kafka producer and consumer configuration options prefixed by producer. or consumer. . 10.2.2. Running Kafka Connect in distributed mode Configure and run Kafka Connect in distributed mode. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Running the cluster Edit the ./config/connect-distributed.properties Kafka Connect configuration file on all Kafka Connect worker nodes. Set the bootstrap.servers option to point to your Kafka brokers. Set the group.id option. Set the config.storage.topic option. Set the offset.storage.topic option. Set the status.storage.topic option. For example: bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 group.id=my-group-id config.storage.topic=my-group-id-configs offset.storage.topic=my-group-id-offsets status.storage.topic=my-group-id-status Start the Kafka Connect workers with the ./config/connect-distributed.properties configuration file on all Kafka Connect nodes. 
./bin/connect-distributed.sh ./config/connect-distributed.properties Verify that Kafka Connect is running. jcmd | grep ConnectDistributed Use the Kafka Connect REST API to manage connectors . 10.3. Managing connectors The Kafka Connect REST API provides endpoints for creating, updating, and deleting connectors directly. You can also use the API to check the status of connectors or change logging levels. When you create a connector through the API, you provide the configuration details for the connector as part of the API call. You can also add and manage connectors as plugins. Plugins are packaged as JAR files that contain the classes to implement the connectors through the Kafka Connect API. You just need to specify the plugin in the classpath or add it to a plugin path for Kafka Connect to run the connector plugin on startup. In addition to using the Kafka Connect REST API or plugins to manage connectors, you can also add connector configuration using properties files when running Kafka Connect in standalone mode. To do this, you simply specify the location of the properties file when starting the Kafka Connect worker process. The properties file should contain the configuration details for the connector, including the connector class, source and destination topics, and any required authentication or serialization settings. 10.3.1. Limiting access to the Kafka Connect API The Kafka Connect REST API can be accessed by anyone who has authenticated access and knows the endpoint URL, which includes the hostname/IP address and port number. It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. For improved security, we recommend configuring the following properties for the Kafka Connect API: (Kafka 3.4 or later) org.apache.kafka.disallowed.login.modules to specifically exclude insecure login modules connector.client.config.override.policy set to NONE to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses 10.3.2. Configuring connectors Use the Kafka Connect REST API or properties files to create, manage, and monitor connector instances. You can use the REST API when using Kafka Connect in standalone or distributed mode. You can use properties files when using Kafka Connect in standalone mode. 10.3.2.1. Using the Kafka Connect REST API to manage connectors When using the Kafka Connect REST API, you can create connectors dynamically by sending PUT or POST HTTP requests to the Kafka Connect REST API, specifying the connector configuration details in the request body. Tip When you use the PUT command, it's the same command for starting and updating connectors. The REST interface listens on port 8083 by default and supports the following endpoints: GET /connectors Return a list of existing connectors. POST /connectors Create a connector. The request body has to be a JSON object with the connector configuration. GET /connectors/<connector_name> Get information about a specific connector. GET /connectors/<connector_name>/config Get configuration of a specific connector. PUT /connectors/<connector_name>/config Update the configuration of a specific connector. GET /connectors/<connector_name>/status Get the status of a specific connector. 
GET /connectors/<connector_name>/tasks Get a list of tasks for a specific connector GET /connectors/<connector_name>/tasks/ <task_id> /status Get the status of a task for a specific connector PUT /connectors/<connector_name>/pause Pause the connector and all its tasks. The connector will stop processing any messages. PUT /connectors/<connector_name>/stop Stop the connector and all its tasks. The connector will stop processing any messages. Stopping a connector from running may be more suitable for longer durations than just pausing. PUT /connectors/<connector_name>/resume Resume a paused connector. POST /connectors/<connector_name>/restart Restart a connector in case it has failed. POST /connectors/<connector_name>/tasks/ <task_id> /restart Restart a specific task. DELETE /connectors/<connector_name> Delete a connector. GET /connectors/<connector_name>/topics Get the topics for a specific connector. PUT /connectors/<connector_name>/topics/reset Empty the set of active topics for a specific connector. GET /connectors/<connector_name>/offsets Get the current offsets for a connector. DELETE /connectors/<connector_name>/offsets Reset the offsets for a connector, which must be in a stopped state. PATCH /connectors/<connector_name>/offsets Adjust the offsets (using an offset property in the request) for a connector, which must be in a stopped state. GET /connector-plugins Get a list of all supported connector plugins. GET /connector-plugins/<connector_plugin_type>/config Get the configuration for a connector plugin. PUT /connector-plugins/<connector_type>/config/validate Validate connector configuration. 10.3.2.2. Specifying connector configuration properties To configure a Kafka Connect connector, you need to specify the configuration details for source or sink connectors. There are two ways to do this: through the Kafka Connect REST API, using JSON to provide the configuration, or by using properties files to define the configuration properties. The specific configuration options available for each type of connector may differ, but both methods provide a flexible way to specify the necessary settings. The following options apply to all connectors: name The name of the connector, which must be unique within the current Kafka Connect instance. connector.class The class of the connector plug-in. For example, org.apache.kafka.connect.file.FileStreamSinkConnector . tasks.max The maximum number of tasks that the specified connector can use. Tasks enable the connector to perform work in parallel. The connector might create fewer tasks than specified. key.converter The class used to convert message keys to and from Kafka format. This overrides the default value set by the Kafka Connect configuration. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. This overrides the default value set by the Kafka Connect configuration. For example, org.apache.kafka.connect.json.JsonConverter . You must set at least one of the following options for sink connectors: topics A comma-separated list of topics used as input. topics.regex A Java regular expression of topics used as input. For all other options, see the connector properties in the Apache Kafka documentation . Note Streams for Apache Kafka includes the example connector configuration files config/connect-file-sink.properties and config/connect-file-source.properties in the Streams for Apache Kafka installation directory. 
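As an illustrative sketch that combines these endpoints with the configuration properties above, the following request updates a hypothetical connector named my-connector through PUT /connectors/<connector_name>/config. The host, port, and settings mirror the examples later in this chapter; adjust them for your environment:
# update the connector configuration; the request body is the flat configuration map
curl -X PUT -H "Content-Type: application/json" \
  --data '{ "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector", "tasks.max": "1", "topics": "my-topic-1,my-topic-2", "file": "/tmp/output-file.txt" }' \
  http://connect0.my-domain.com:8083/connectors/my-connector/config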
Additional resources Kafka Connect REST API OpenAPI documentation 10.3.3. Creating connectors using the Kafka Connect API Use the Kafka Connect REST API to create a connector to use with Kafka Connect. Prerequisites A Kafka Connect installation. Procedure Prepare a JSON payload with the connector configuration. For example: { "name": "my-connector", "config": { "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector", "tasks.max": "1", "topics": "my-topic-1,my-topic-2", "file": "/tmp/output-file.txt" } } Send a POST request to <KafkaConnectAddress> :8083/connectors to create the connector. The following example uses curl : curl -X POST -H "Content-Type: application/json" --data @sink-connector.json http://connect0.my-domain.com:8083/connectors Verify that the connector was deployed by sending a GET request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors 10.3.4. Deleting connectors using the Kafka Connect API Use the Kafka Connect REST API to delete a connector from Kafka Connect. Prerequisites A Kafka Connect installation. Deleting connectors Verify that the connector exists by sending a GET request to <KafkaConnectAddress> :8083/connectors/ <ConnectorName> . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors To delete the connector, send a DELETE request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl -X DELETE http://connect0.my-domain.com:8083/connectors/my-connector Verify that the connector was deleted by sending a GET request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors 10.3.5. Adding connector plugins Kafka provides example connectors to use as a starting point for developing connectors. The following example connectors are included with Streams for Apache Kafka: FileStreamSink Reads data from Kafka topics and writes the data to a file. FileStreamSource Reads data from a file and sends the data to Kafka topics. Both connectors are contained in the libs/connect-file-<kafka_version>.redhat-<build>.jar plugin. To use the connector plugins in Kafka Connect, you can add them to the classpath or specify a plugin path in the Kafka Connect properties file and copy the plugins to the location. Specifying the example connectors in the classpath CLASSPATH=/opt/kafka/libs/connect-file-<kafka_version>.redhat-<build>.jar opt/kafka/bin/connect-distributed.sh Setting a plugin path plugin.path=/opt/kafka/connector-plugins,/opt/connectors The plugin.path configuration option can contain a comma-separated list of paths. You can add more connector plugins if needed. Kafka Connect searches for and runs connector plugins at startup. Note When running Kafka Connect in distributed mode, plugins must be made available on all worker nodes.
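To round out the create and delete procedures above, the status and pause/resume endpoints listed in Section 10.3.2.1 can be exercised in the same way. A sketch using the same example connector and address:
# check the connector and task status
curl http://connect0.my-domain.com:8083/connectors/my-connector/status
# temporarily pause the connector and its tasks, then resume them
curl -X PUT http://connect0.my-domain.com:8083/connectors/my-connector/pause
curl -X PUT http://connect0.my-domain.com:8083/connectors/my-connector/resume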
[ "bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092", "./bin/connect-standalone.sh ./config/connect-standalone.properties connector1.properties [connector2.properties ...]", "jcmd | grep ConnectStandalone", "bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 group.id=my-group-id config.storage.topic=my-group-id-configs offset.storage.topic=my-group-id-offsets status.storage.topic=my-group-id-status", "./bin/connect-distributed.sh ./config/connect-distributed.properties", "jcmd | grep ConnectDistributed", "{ \"name\": \"my-connector\", \"config\": { \"connector.class\": \"org.apache.kafka.connect.file.FileStreamSinkConnector\", \"tasks.max\": \"1\", \"topics\": \"my-topic-1,my-topic-2\", \"file\": \"/tmp/output-file.txt\" } }", "curl -X POST -H \"Content-Type: application/json\" --data @sink-connector.json http://connect0.my-domain.com:8083/connectors", "curl http://connect0.my-domain.com:8083/connectors", "curl http://connect0.my-domain.com:8083/connectors", "curl -X DELETE http://connect0.my-domain.com:8083/connectors/my-connector", "curl http://connect0.my-domain.com:8083/connectors", "CLASSPATH=/opt/kafka/libs/connect-file-<kafka_version>.redhat-<build>.jar opt/kafka/bin/connect-distributed.sh", "plugin.path=/opt/kafka/connector-plugins,/opt/connectors" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-kafka-connect-str
Chapter 19. message
Chapter 19. message The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty structured field is present. See the description of structured for more. Data type text Example value HAPPY
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/message
Images
Images OpenShift Container Platform 4.7 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324", "apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed", "oc edit configs.samples.operator.openshift.io/cluster -o yaml", "apiVersion: samples.operator.openshift.io/v1 kind: Config", "oc tag -d <image_stream_name:tag>", "Deleted tag default/<image_stream_name:tag>.", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq 
.spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y", "RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y", "FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile", "FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y", "RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory", "LABEL io.openshift.tags mongodb,mongodb24,nosql", "LABEL io.openshift.wants mongodb,redis", "LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support", "LABEL io.openshift.non-scalable true", "LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "s2i create _<image name>_ _<destination directory>_", "IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . 
IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run", "podman build -t <builder_image_name>", "docker build -t <builder_image_name>", "podman run <builder_image_name> .", "docker run <builder_image_name> .", "s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_", "podman run <output_application_image_name>", "docker run <output_application_image_name>", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "oc tag <source> <destination>", "oc tag ruby:2.0 ruby:static-2.0", "oc tag --alias=true <source> <destination>", "oc delete istag/ruby:latest", "oc tag -d ruby:latest", "<image_stream_name>:<tag>", "<image_stream_name>@<id>", "openshift/ruby-20-centos7:2.0", "registry.redhat.io/rhel7:latest", "centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b", "oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b", "oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso", "oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5", "<image-stream-name>@<image-id>", "origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 
172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest", "<imagestream name>:<tag>", "origin-ruby-sample:latest", "apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "oc describe is/<image-name>", "oc describe is/python", "Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago", "oc describe istag/<image-stream>:<tag-name>", "oc describe istag/python:latest", "Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801", "oc tag <image-name:tag1> <image-name:tag2>", "oc tag python:3.5 python:latest", "Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.", "oc describe is/python", "Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago", "oc tag <repository/image> <image-name:tag>", "oc tag docker.io/python:3.6.0 python:3.6", "Tag python:3.6 set to docker.io/python:3.6.0.", "oc tag <image-name:tag> <image-name:latest>", "oc tag python:3.6 python:latest", "Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.", "oc tag -d <image-name:tag>", "oc tag -d python:3.5", "Deleted tag default/python:3.5.", "oc tag <repository/image> <image-name:tag> --scheduled", "oc tag docker.io/python:3.6.0 python:3.6 --scheduled", "Tag python:3.6 set to import docker.io/python:3.6.0 periodically.", "oc tag <repositiory/image> <image-name:tag>", "oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson", "oc import-image <imagestreamtag> --from=<image> --confirm", "oc create secret generic <pull_secret_name> 
--from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false", "Key: image.openshift.io/triggers Value: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, ]", "oc set triggers deploy/example --from-image=example:latest -c web", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.19.0+7070803", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/policy.json", "{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ 
{ \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }", "spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf", "unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 
-----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 source: registry.access.redhat.com/ubi8/ubi-minimal 2 - mirrors: - example.com/example/ubi-minimal source: registry.access.redhat.com/ubi8/ubi-minimal - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 3", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.20.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.20.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.20.0 ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.20.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.20.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.20.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = \"registry.access.redhat.com/ubi8/\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"example.io/example/ubi8-minimal\" insecure = false [[registry.mirror]] location = \"example.com/example/ubi8-minimal\" insecure = false", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc create -f <filename>", "oc create -f <filename> -n <project>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby\" creationTimestamp: null spec: dockerImageRepository: \"registry.redhat.io/rhscl/ruby-26-rhel7\" tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"", "oc process -f <filename> -l name=otherLabel", "oc process --parameters -f <filename>", "oc process --parameters -n <project> <template_name>", "oc process --parameters -n openshift rails-postgresql-example", "NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample 
application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB", "oc process -f <filename>", "oc process <template_name>", "oc process -f <filename> | oc create -f -", "oc process <template> | oc create -f -", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -", "cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql --param-file=postgres.env", "sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-", "oc edit template <template>", "oc get templates -n openshift", "apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 
4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10", "kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2", "parameters: - name: USERNAME description: \"The user name for Joe\" value: joe", "parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"", "parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"", "{ \"parameters\": [ {\"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... 
The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10", "kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: bar - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath", "{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }", "\"template.alpha.openshift.io/wait-for-ready\": \"true\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:", "oc get -o yaml all > <yaml_filename>", "sudo yum install -y postgresql postgresql-server postgresql-devel", "sudo postgresql-setup initdb", "sudo systemctl start postgresql.service", "sudo -u postgres createuser -s rails", "gem install rails", "Successfully installed rails-4.3.0 1 gem installed", "rails new rails-app --database=postgresql", "cd rails-app", "gem 'pg'", "bundle install", "default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password:", "rake db:create", "rails generate controller welcome index", "root 'welcome#index'", "rails server", "<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? 
ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>", "ls -1", "app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor", "git init", "git add .", "git commit -m \"initial commit\"", "git remote add origin [email protected]:<namespace/repository-name>.git", "git push", "oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"", "oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password", "-e POSTGRESQL_ADMIN_PASSWORD=admin_pw", "oc get pods --watch", "oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql", "oc get dc rails-app -o json", "env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],", "oc logs -f build/rails-app-1", "oc get pods", "oc rsh <frontend_pod_id>", "RAILS_ENV=production bundle exec rake db:migrate", "oc expose service rails-app --hostname=www.example.com", "podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>", "oc new-app -e JENKINS_PASSWORD=<password> openshift4/ose-jenkins", "oc describe serviceaccount jenkins", "Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp", "oc describe secret <secret name from above>", "Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA", "pluginId:pluginVersion", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> 
<resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "oc new-app jenkins-persistent", "oc new-app jenkins-ephemeral", "kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange", "docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>", "docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>", "docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>", "docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>", "docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>", "podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }", "podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7", "image:///usr/libexec/s2i", "#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc", "#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/images/index
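The command listing above walks through editing the cluster-wide image configuration (oc edit image.config.openshift.io/cluster) and inspecting the resulting policy.json and registries.conf files on individual nodes. As a minimal sketch for reviewing the merged configuration without opening an editor - assuming you are logged in to the cluster with sufficient privileges - you can read the same resource back and watch the node rollout:
oc get image.config.openshift.io/cluster -o yaml
oc get nodes    # repeat until no node reports SchedulingDisabled; the change is rolled out node by node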
Chapter 3. GNBD Driver and Command Usage
Chapter 3. GNBD Driver and Command Usage The Global Network Block Device (GNBD) driver allows a node to export its local storage as a GNBD over a network so that other nodes on the network can share the storage. Client nodes importing the GNBD use it like any other block device. Importing a GNBD on multiple clients forms a shared storage configuration through which GFS can be used. The GNBD driver is implemented through the following components. gnbd_serv - Implements the GNBD server. It is a user-space daemon that allows a node to export local storage over a network. gnbd.ko - Implements the GNBD device driver on GNBD clients (nodes using GNBD devices). Two user commands are available to configure GNBD: gnbd_export (for servers) - User program for creating, exporting, and managing GNBDs on a GNBD server. gnbd_import (for clients) - User program for importing and managing GNBDs on a GNBD client. 3.1. Exporting a GNBD from a Server The gnbd_serv daemon must be running on a node before it can export storage as a GNBD. You can start the gnbd_serv daemon by running gnbd_serv as follows: Once local storage has been identified to be exported, the gnbd_export command is used to export it. Warning When you configure GNBD servers with device-mapper multipath, you must not use page caching. All GNBDs that are part of a logical volume must run with caching disabled. By default, the gnbd_export command exports with caching turned off. Note A server should not import the GNBDs to use them as a client would. If a server exports the devices uncached, the underlying devices may also be used by gfs . Usage pathname Specifies a storage device to export. gnbdname Specifies an arbitrary name selected for the GNBD. It is used as the device name on GNBD clients. This name must be unique among all GNBDs exported in a network. -o Export the device as read-only. -c Enable caching. Reads from the exported GNBD take advantage of the Linux page cache. By default, the gnbd_export command does not enable caching. Warning When you configure GNBD servers with device-mapper multipath, do not specify the -c option, as this leads to data corruption. All GNBDs that are part of a logical volume must run with caching disabled. Note If you have been using GFS 5.2 or earlier and do not want to change your GNBD setup, you should specify the -c option. Before GFS Release 5.2.1, Linux caching was enabled by default for gnbd_export . If the -c option is not specified, GNBD runs with a noticeable performance decrease. Also, if the -c option is not specified, the exported GNBD runs in timeout mode, using the default timeout value (the -t option). For more information about the gnbd_export command and its options, refer to the gnbd_export man page. -u uid Manually sets the Universal Identifier for an exported device. This option is used with -e . The UID is used by device-mapper multipath to determine which devices belong in a multipath map. A device must have a UID to be multipathed. However, for most SCSI devices the default Get UID command, /usr/sbin/gnbd_get_uid , will return an appropriate value. Note The UID refers to the device being exported, not the GNBD itself. The UIDs of two GNBD devices should be equal only if they are exporting the same underlying device. This means that both GNBD servers are connected to the same physical device. Warning This option should only be used for exporting shared storage devices when the -U command option does not work. This should almost never happen for SCSI devices. 
If two GNBD devices are not exporting the same underlying device, but are given the same UID, data corruption will occur. -U Command Gets the UID command. The UID command is a command the gnbd_export command will run to get a Universal Identifier for the exported device. The UID is necessary to use device-mapper multipath with GNBD. The command must use the full path of any executable that you wish to run. A command can contain the %M, %m or %n escape sequences. %M will be expanded to the major number of the exported device, %m will be expanded to the minor number of the exported device, and %n will be expanded to the sysfs name for the device. If no command is given, GNBD will use the default command /usr/sbin/gnbd_get_uid . This command will work for most SCSI devices. Examples This example is for a GNBD server configured with GNBD multipath. It exports device /dev/sdc2 as GNBD gamma . Cache is disabled by default. This example is for a GNBD server not configured with GNBD multipath. It exports device /dev/sdb1 as GNBD delta with cache enabled. This example exports device /dev/sdb2 as GNBD delta with cache enabled. A client-side import sketch is shown after this chapter's command listing.
[ "gnbd_serv gnbd_serv: startup succeeded", "gnbd_export -d pathname -e gnbdname [ -c ][ -u ][ -U", "gnbd_export -d /dev/sdc2 -e gamma -U", "gnbd_export -d /dev/sdb1 -e delta -c", "gnbd_export -d /dev/sdb2 -e delta -c" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/ch-driver-command
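The chapter above only shows the server-side gnbd_export workflow; the client side uses the gnbd_import command listed among the GNBD components. The following is a hedged sketch of a typical import sequence - the option letter and the /dev/gnbd/ device location are assumptions that should be confirmed against the gnbd_import man page - with server1 standing in for a real GNBD server hostname:
modprobe gnbd                # load the client driver (gnbd.ko)
gnbd_import -i server1       # import the GNBDs exported by server1
ls /dev/gnbd/                # imported GNBDs are expected to appear here under their gnbdname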
5.2.4. Create the bootable media
5.2.4. Create the bootable media P2V uses bootable media to create a bootable image of the hard drive of a physical machine and send it to the conversion server for import into a hypervisor. You will need a Red Hat subscription to download the rhel-6.x-p2v.iso image. Follow the instructions on preparing bootable media from the Red Hat Enterprise Linux Installation Guide . Note that there is only one ISO image for both i386 and x86_64 architectures. The latest release of rhel-6.x-p2v.iso can be found at https://rhn.redhat.com/rhn/channels/PackageList.do?filter_string=virt-p2v&cid=10508 . The ISO file will be installed in /usr/share/virt-p2v/ . Create the appropriate bootable media: The rhel-6.x-p2v.iso file can be used in three ways: as a bootable disk, as a PXE boot image, or as a bootable USB device. Burn the ISO to a blank CD-ROM or DVD-ROM, and insert it into the disk drive of the physical machine that is to be converted. To boot properly from this boot media, some changes to BIOS settings may be required to ensure that the optical disk drive is first in the boot order. Use the ISO to create bootable USB media (a sketch of one way to do this follows this section). To boot properly from this boot media, some changes to BIOS settings may be required to ensure that the USB device is first in the boot order. In addition, some BIOSes do not support booting from USB media. The P2V client disk image is approximately 100 MB, so the USB device must be large enough to hold the disk image. Prepare a PXE boot image for your existing PXE server. To boot from PXE, some changes to BIOS settings may be required to ensure that the PXE boot is first in the boot order. More information about creating boot media can be found in Appendix A, Additional procedures . You have finished preparing and are now ready to move on to converting a physical machine to a virtual machine.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/preparation_before_the_p2v_migration-download_p2v_iso_from_rhn
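The section above lists USB as one of the three boot options but does not show a command for writing the image. A minimal sketch, assuming the ISO has been copied out of /usr/share/virt-p2v/ to the current directory and that /dev/sdX stands in for your actual USB device (verify the device name first, because dd overwrites the target):
dd if=rhel-6.x-p2v.iso of=/dev/sdX bs=4M    # write the P2V boot image to the USB device
sync                                        # flush buffers before removing the device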
Chapter 157. BrokerAndVolumeIds schema reference
Chapter 157. BrokerAndVolumeIds schema reference Used in: KafkaRebalanceSpec Properties: brokerId (property type: integer) - ID of the broker that contains the disk from which you want to move the partition replicas. volumeIds (property type: integer array) - IDs of the disks from which the partition replicas need to be moved. A hedged YAML sketch showing how these properties nest follows this section.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-BrokerAndVolumeIds-reference
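Because BrokerAndVolumeIds is used inside KafkaRebalanceSpec, the two properties appear in practice as YAML within a KafkaRebalance custom resource. The sketch below is hedged: the surrounding field names (mode: remove-disks and moveReplicasOffVolumes) and the API version are assumptions about the intra-broker disk-removal rebalance and should be checked against the KafkaRebalanceSpec reference; only brokerId and volumeIds come from this schema.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-disks              # assumed mode name for moving replicas off disks
  moveReplicasOffVolumes:         # assumed field; each entry is a BrokerAndVolumeIds object
    - brokerId: 0
      volumeIds: [1, 2]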
Chapter 4. Fencing Controller nodes with STONITH
Chapter 4. Fencing Controller nodes with STONITH Fencing is the process of isolating a failed node to protect the cluster and the cluster resources. Without fencing, a failed node might result in data corruption in a cluster. Director uses Pacemaker to provide a highly available cluster of Controller nodes. Pacemaker uses a process called STONITH to fence failed nodes. STONITH is an acronym for "Shoot the other node in the head". STONITH is disabled by default and requires manual configuration so that Pacemaker can control the power management of each node in the cluster. If a Controller node fails a health check, the Controller node that acts as the Pacemaker designated coordinator (DC) uses the Pacemaker stonith service to fence the impacted Controller node. Important Deploying a highly available overcloud without STONITH is not supported. You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters . 4.1. Supported fencing agents When you deploy a high availability environment with fencing, you can choose the fencing agents based on your environment needs. To change the fencing agent, you must configure additional parameters in the fencing.yaml file. Red Hat OpenStack Platform (RHOSP) supports the following fencing agents: Intelligent Platform Management Interface (IPMI) Default fencing mechanism that Red Hat OpenStack Platform (RHOSP) uses to manage fencing. STONITH Block Device (SBD) The SBD (Storage-Based Death) daemon integrates with Pacemaker and a watchdog device to arrange for nodes to reliably shut down when fencing is triggered and in cases where traditional fencing mechanisms are not available. Important You can only configure SBD fencing on controller nodes, because it is not supported on virtual machines or nodes that use the pacemaker_remote service. fence_sbd and sbd poison-pill fencing with block storage devices are not supported. SBD fencing is only supported with compatible watchdog devices. For more information, see Support Policies for RHEL High Availability Clusters - sbd and fence_sbd . fence_kdump Use in deployments with the kdump crash recovery service. If you choose this agent, ensure that you have enough disk space to store the dump files. You can configure this agent in addition to the IPMI, fence_rhevm , or Redfish fencing agents. If you configure multiple fencing agents, ensure that you allocate enough time for the first agent to complete the task before the second agent starts the task. Implement this fence_kdump STONITH agent as a first level device. For example: Important RHOSP director supports only the configuration of the fence_kdump STONITH agent, and not the configuration of the full kdump service that the fencing agent depends on. For information about configuring the kdump service, see the article How do I configure fence_kdump in a Red Hat Pacemaker cluster . fence_kdump is not supported if the Pacemaker network traffic interface uses the ovs_bridges or ovs_bonds network device. To enable fence_kdump , you must change the network device to linux_bond or linux_bridge . Redfish Use in deployments with servers that support the DMTF Redfish APIs. To specify this agent, change the value of the agent parameter to fence_redfish in the fencing.yaml file. For more information about Redfish, see the DTMF Documentation . 
Multi-layered fencing You can configure multiple fencing agents to support complex fencing use cases. For example, you can configure IPMI fencing together with fence_kdump . The order of the fencing agents determines the order in which Pacemaker triggers each mechanism. Additional resources Section 4.2, "Deploying fencing on the overcloud" Section 4.3, "Testing fencing on the overcloud" Section 4.5, "Fencing parameters" 4.2. Deploying fencing on the overcloud To deploy fencing on the overcloud, first review the state of STONITH and Pacemaker and configure the fencing.yaml file. Then, deploy the overcloud and configure additional parameters. Finally, test that fencing is deployed correctly on the overcloud. Prerequisites You have chosen the correct fencing agent for your deployment. For the list of supported fencing agents, see Section 4.1, "Supported fencing agents" . You have verified that you can access the nodes.json file that you created when you registered your nodes in director. This file is a required input for the fencing.yaml file that you generate during deployment. This nodes.json file must contain the MAC address of one of the network interfaces (NICs) on the node. For more information, see Registering Nodes for the Overcloud . You have verified that all of the parameters for your fencing agents are correctly specified to ensure that these agents will be successfully created. However, the fencing agents will be created even when their parameters are incorrectly specified. Therefore you must ensure that all the created fencing agents are in the "running" and not the "stopped" state. Procedure Log in to each Controller node as the tripleo-admin user. Verify that the cluster is running: Example output: Verify that STONITH is disabled: Example output: Depending on the fencing agent that you want to use, choose one of the following options: If you use the IPMI or RHV fencing agent, generate the fencing.yaml environment file: Note This command converts ilo and drac power management details to IPMI equivalents. If you use a different fencing agent, such as STONITH Block Device (SBD), fence_kdump , or Redfish, or if you use pre-provisioned nodes, create the fencing.yaml file manually. SBD fencing only: Add the following parameter to the fencing.yaml file: Note This step is applicable to initial overcloud deployments only. For more information about how to enable SBD fencing on an existing overcloud, see Enabling sbd fencing in RHEL 7 and 8 . Multi-layered fencing only: Add the level-specific parameters to the generated fencing.yaml file: Replace <parameter> and <value> with the actual parameters and values that the fencing agent requires. Run the overcloud deploy command and include the fencing.yaml file and any other environment files that are relevant for your deployment: SBD fencing only: Set the watchdog timer device interval and check that the interval is set correctly. Ensure that all of the created fencing agents are in the "running" and not the "stopped" state. Fencing agents in the "stopped" state have been incorrectly configured. Verification Log in to the overcloud as the tripleo-admin user and ensure that Pacemaker is configured as the resource manager: In this example, Pacemaker is configured to use a STONITH resource for each of the Controller nodes that are specified in the fencing.yaml file. Note You must not configure the fence-resource process on the same node that it controls. Check the fencing resource attributes. 
The STONITH attribute values must match the values in the fencing.yaml file: Additional Resources Section 4.3, "Testing fencing on the overcloud" Section 4.5, "Fencing parameters" Exploring RHEL High Availability's Components - sbd and fence_sbd 4.3. Testing fencing on the overcloud To test that fencing works correctly, trigger fencing by closing all ports on a Controller node and restarting the server. Important This procedure deliberately drops all connections to the Controller node, which causes the node to restart. Prerequisites Fencing is deployed and running on the overcloud. For information on how to deploy fencing, see Section 4.2, "Deploying fencing on the overcloud" . Controller node is available for a restart. Procedure Log in to a Controller node as the stack user and source the credentials file: Change to the root user and close all connections to the Controller node: From a different Controller node, locate the fencing event in the Pacemaker log file: If the STONITH service performed the fencing action on the Controller, the log file shows a fencing event. Wait a few minutes and then verify that the restarted Controller node is running in the cluster again by running the pcs status command. If you can see the Controller node that you restarted in the output, fencing functions correctly. 4.4. Viewing STONITH device information To see how STONITH configures your fencing devices, run the pcs stonith status --full command from the overcloud. Prerequisites Fencing is deployed and running on the overcloud. For information on how to deploy fencing, see Section 4.2, "Deploying fencing on the overcloud" . Procedure Show the list of Controller nodes and the status of their STONITH devices: This output shows the following information for each resource: IPMI power management service that the fencing device uses to turn the machines on and off as needed, such as fence_ipmilan . IP address of the IPMI interface, such as 10.100.0.51 . User name to log in with, such as admin . Password to use to log in to the node, such as abc . Interval in seconds at which each host is monitored, such as 60s . 4.5. Fencing parameters When you deploy fencing on the overcloud, you generate the fencing.yaml file with the required parameters to configure fencing. The following example shows the structure of the fencing.yaml environment file: This file contains the following parameters: EnableFencing Enables the fencing functionality for Pacemaker-managed nodes. FencingConfig Lists the fencing devices and the parameters for each device: agent : Fencing agent name. host_mac : The mac address in lowercase of the provisioning interface or any other network interface on the server. You can use this as a unique identifier for the fencing device. Important Do not use the MAC address of the IPMI interface. params : List of fencing device parameters. Fencing device parameters Lists the fencing device parameters. This example shows the parameters for the IPMI fencing agent: auth : IPMI authentication type ( md5 , password , or none). ipaddr : IPMI IP address. ipport : IPMI port. login : Username for the IPMI device. passwd : Password for the IPMI device. lanplus : Use lanplus to improve security of connection. privlvl : Privilege level on IPMI device pcmk_host_list : List of Pacemaker hosts. Additional resources Section 4.2, "Deploying fencing on the overcloud" Section 4.1, "Supported fencing agents" 4.6. Additional resources "Configuring fencing in a Red Hat High Availability cluster"
[ "parameter_defaults: EnableFencing: true FencingConfig: devices: level1: - agent: fence_kdump host_mac: aa:bb:cc:dd:ee:ff level2: - agent: fence_ipmilan host_mac: aa:bb:cc:dd:ee:ff params: <parameter>: <value>", "sudo pcs status", "Cluster name: openstackHA Last updated: Wed Jun 24 12:40:27 2015 Last change: Wed Jun 24 11:36:18 2015 Stack: corosync Current DC: lb-c1a2 (2) - partition with quorum Version: 1.1.12-a14efad 3 Nodes configured 141 Resources configured", "sudo pcs property show", "Cluster Properties: cluster-infrastructure: corosync cluster-name: openstackHA dc-version: 1.1.12-a14efad have-watchdog: false stonith-enabled: false", "(undercloud) USD openstack overcloud generate fencing --output fencing.yaml nodes.json", "parameter_defaults: ExtraConfig: pacemaker::corosync::enable_sbd: true", "parameter_defaults: EnableFencing: true FencingConfig: devices: level1: - agent: [VALUE] host_mac: \"aa:bb:cc:dd:ee:ff\" params: <parameter>: <value> level2: - agent: fence_agent2 host_mac: \"aa:bb:cc:dd:ee:ff\" params: <parameter>: <value>", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --ntp-server pool.ntp.org --neutron-network-type vxlan --neutron-tunnel-types vxlan -e fencing.yaml", "pcs property set stonith-watchdog-timeout=<interval> pcs property show", "source stackrc metalsmith list | grep controller ssh tripleo-admin@<controller-x_ip> sudo pcs status | grep fence stonith-overcloud-controller-x (stonith:fence_ipmilan): Started overcloud-controller-y", "sudo pcs stonith show <stonith-resource-controller-x>", "source stackrc metalsmith list | grep controller ssh tripleo-admin@<controller-x_ip>", "sudo -i iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT && iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT && iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5016 -j ACCEPT && iptables -A INPUT -p udp -m state --state NEW -m udp --dport 5016 -j ACCEPT && iptables -A INPUT ! -i lo -j REJECT --reject-with icmp-host-prohibited && iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT && iptables -A OUTPUT -p tcp --sport 5016 -j ACCEPT && iptables -A OUTPUT -p udp --sport 5016 -j ACCEPT && iptables -A OUTPUT ! 
-o lo -j REJECT --reject-with icmp-host-prohibited", "ssh tripleo-admin@<controller-x_ip> less /var/log/cluster/corosync.log (less): /fenc*", "sudo pcs stonith status --full Resource: my-ipmilan-for-controller-0 (class=stonith type=fence_ipmilan) Attributes: pcmk_host_list=overcloud-controller-0 ipaddr=10.100.0.51 login=admin passwd=abc lanplus=1 cipher=3 Operations: monitor interval=60s (my-ipmilan-for-controller-0-monitor-interval-60s) Resource: my-ipmilan-for-controller-1 (class=stonith type=fence_ipmilan) Attributes: pcmk_host_list=overcloud-controller-1 ipaddr= 10.100.0.52 login=admin passwd=abc lanplus=1 cipher=3 Operations: monitor interval=60s (my-ipmilan-for-controller-1-monitor-interval-60s) Resource: my-ipmilan-for-controller-2 (class=stonith type=fence_ipmilan) Attributes: pcmk_host_list=overcloud-controller-2 ipaddr= 10.100.0.53 login=admin passwd=abc lanplus=1 cipher=3 Operations: monitor interval=60s (my-ipmilan-for-controller-2-monitor-interval-60s)", "parameter_defaults: EnableFencing: true FencingConfig: devices: - agent: fence_ipmilan host_mac: \"11:11:11:11:11:11\" params: ipaddr: 10.0.0.101 lanplus: true login: admin passwd: InsertComplexPasswordHere privlvl: administrator" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_high_availability_services/assembly_fencing-controller-nodes_rhosp
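Section 4.3 above tests fencing indirectly by dropping all connections to a Controller node with iptables. As a lighter-weight alternative sketch - using the pcs tooling already shown in this chapter, with overcloud-controller-x standing in for a real node name - you can ask Pacemaker to fence the node directly and then watch it rejoin:
sudo pcs stonith fence overcloud-controller-x    # triggers the STONITH device configured for that node
sudo pcs status                                  # repeat after a few minutes; the fenced node should return to the cluster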
Chapter 59. Configuring an embedded process engine and decision engine in Oracle WebLogic Server
Chapter 59. Configuring an embedded process engine and decision engine in Oracle WebLogic Server An embedded engine is a light-weight workflow and rule engine that enables you to execute your decisions and business processes. An embedded engine can be part of a Red Hat Process Automation Manager application or it can be deployed as a service through OpenShift, Kubernetes, and Docker. You can embed an engine in a Red Hat Process Automation Manager application through the API or as a set of contexts and dependency injection (CDI) services. If you intend to use an embedded engine with your Red Hat Process Automation Manager application, you must add Maven dependencies to your project by adding the Red Hat Business Automation bill of materials (BOM) files to the project's pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Procedure Declare the Red Hat Business Automation BOM in the pom.xml file: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies> Declare dependencies required for your project in the <dependencies> tag. After you import the product BOM into your project, the versions of the user-facing product dependencies are defined so you do not need to specify the <version> sub-element of these <dependency> elements. However, you must use the <dependency> element to declare dependencies which you want to use in your project. For a basic Red Hat Process Automation Manager project, declare the following dependencies, depending on the features that you want to use: Embedded process engine dependencies <!-- Public KIE API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <!-- Core dependencies for process engine --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow-builder</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-runtime-manager</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-persistence-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-query-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-audit</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <!-- Dependency needed for default WorkItemHandler implementations. --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-workitems-core</artifactId> </dependency> <!-- Logging dependency. You can use any logging framework compatible with slf4j. 
--> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency> For a Red Hat Process Automation Manager project that uses CDI, you typically declare the following dependencies: CDI-enabled process engine dependencies <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-cdi</artifactId> </dependency> Embedded decision engine dependencies <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency> To use KIE Server, declare the following dependencies: Client application KIE Server dependencies <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency> To create a remote client for Red Hat Process Automation Manager, declare the following dependency: Client dependency <dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency> When creating a JAR file that includes assets, such as rules and process definitions, specify the packaging type for your Maven project as kjar and use org.kie:kie-maven-plugin to process the kjar packaging type located under the <project> element. In the following example, USD{kie.version} is the Maven library version listed in What is the mapping between Red Hat Process Automation Manager and the Maven library version? : <packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build> If you use a process engine or decision engine with persistence support in your project, you must declare the following hibernate dependencies in the dependencyManagement section of your pom.xml file by copying the version.org.hibernate-4ee7 property from the Red Hat Business Automation BOM file: Hibernate dependencies <!-- hibernate dependencies --> <dependencyManagement> <dependencies> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>USD{version.org.hibernate-4ee7}</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>USD{version.org.hibernate-4ee7}</version> </dependency> </dependencies> </dependencyManagement>
[ "<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies>", "<!-- Public KIE API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <!-- Core dependencies for process engine --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow-builder</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-runtime-manager</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-persistence-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-query-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-audit</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <!-- Dependency needed for default WorkItemHandler implementations. --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-workitems-core</artifactId> </dependency> <!-- Logging dependency. You can use any logging framework compatible with slf4j. --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>", "<dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-cdi</artifactId> </dependency>", "<dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. 
--> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency>", "<dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency>", "<dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency>", "<packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build>", "<!-- hibernate dependencies --> <dependencyManagement> <dependencies> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>USD{version.org.hibernate-4ee7}</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>USD{version.org.hibernate-4ee7}</version> </dependency> </dependencies> </dependencyManagement>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/wls-configure-embedded-engine-proc
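The chapter above covers only the Maven dependency setup and does not show the embedded engine being bootstrapped. The following is a minimal, hedged Java sketch of what client code typically looks like once kie-api and the engine dependencies are on the classpath; the placeholder fact and the commented-out process ID are illustrative and depend on what your own kjar defines.
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class EmbeddedEngineExample {
    public static void main(String[] args) {
        // Bootstrap the KIE API and load the KIE modules found on the classpath.
        KieServices kieServices = KieServices.Factory.get();
        KieContainer kieContainer = kieServices.getKieClasspathContainer();

        // Create a session; pass a session name here if your kmodule.xml defines one.
        KieSession kieSession = kieContainer.newKieSession();
        try {
            // Decision engine usage: insert facts and fire rules.
            kieSession.insert(new Object()); // placeholder fact
            kieSession.fireAllRules();

            // Process engine usage: start a process defined in your kjar (placeholder ID).
            // kieSession.startProcess("com.example.MyProcess");
        } finally {
            kieSession.dispose();
        }
    }
}
The classpath container picks up any kmodule.xml packaged in your project, which is why the kjar packaging and kie-maven-plugin configuration shown above matter.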
Chapter 7. Configuring the discovery image
Chapter 7. Configuring the discovery image The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can use Ignition to customize the discovery image. Note Modifications to the discovery image will not persist in the system. 7.1. Creating an Ignition configuration file Ignition is a low-level system configuration utility, which is part of the temporary initial root filesystem, the initramfs . When Ignition runs on the first boot, it finds configuration data in the Ignition configuration file and applies it to the host before switch_root is called to pivot to the host's root filesystem. Ignition uses a JSON configuration specification file to represent the set of changes that occur on the first boot. Important Ignition versions newer than 3.2 are not supported, and will raise an error. Procedure Create an Ignition file and specify the configuration specification version: USD vim ~/ignition.conf { "ignition": { "version": "3.1.0" } } Add configuration data to the Ignition file. For example, add a password to the core user. Generate a password hash: USD openssl passwd -6 Add the generated password hash to the core user: { "ignition": { "version": "3.1.0" }, "passwd": { "users": [ { "name": "core", "passwordHash": "USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1" } ] } } Save the Ignition file and export it to the IGNITION_FILE variable: USD export IGNITION_FILE=~/ignition.conf 7.2. Modifying the discovery image with Ignition Once you create an Ignition configuration file, you can modify the discovery image by patching the infrastructure environment using the Assisted Installer API. Prerequisites If you used the UI to create the cluster, you have set up the API authentication. You have an infrastructure environment and you have exported the infrastructure environment id to the INFRA_ENV_ID variable. You have a valid Ignition file and have exported the file name as USDIGNITION_FILE . Procedure Create an ignition_config_override JSON object and redirect it to a file: USD jq -n \ --arg IGNITION "USD(jq -c . USDIGNITION_FILE)" \ '{ignition_config_override: USDIGNITION}' \ > discovery_ignition.json Refresh the API token: USD source refresh-token Patch the infrastructure environment: USD curl \ --header "Authorization: Bearer USDAPI_TOKEN" \ --header "Content-Type: application/json" \ -XPATCH \ -d @discovery_ignition.json \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq The ignition_config_override object references the Ignition file. Download the updated discovery image.
[ "vim ~/ignition.conf", "{ \"ignition\": { \"version\": \"3.1.0\" } }", "openssl passwd -6", "{ \"ignition\": { \"version\": \"3.1.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"passwordHash\": \"USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1\" } ] } }", "export IGNITION_FILE=~/ignition.conf", "jq -n --arg IGNITION \"USD(jq -c . USDIGNITION_FILE)\" '{ignition_config_override: USDIGNITION}' > discovery_ignition.json", "source refresh-token", "curl --header \"Authorization: Bearer USDAPI_TOKEN\" --header \"Content-Type: application/json\" -XPATCH -d @discovery_ignition.json https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/assisted_installer_for_openshift_container_platform/assembly_configuring-the-discovery-image
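After the PATCH request in the procedure above succeeds, it can be useful to confirm that the override was stored before downloading the updated discovery image. A small sketch that reuses the variables and endpoint the procedure already defines (no new API paths are assumed; the GET simply reads back the infrastructure environment):
source refresh-token
curl -s --header "Authorization: Bearer $API_TOKEN" \
  https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID \
  | jq '.ignition_config_override'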
7.2. Upgrading an in-service integrated environment
7.2. Upgrading an in-service integrated environment Follow the instructions below to upgrade an integrated Red Hat Virtualization and Red Hat Gluster Storage environment that is online and in-service. Upgrade the Red Hat Gluster Storage nodes Perform the following steps for each Red Hat Gluster Storage node, one node at a time. If you have multiple replica sets, upgrade each node in a replica set before moving on to another replica set. Stop all gluster services on the storage node that you want to upgrade by running the following commands as the root user on that node. In the Administration Portal, click Storage Hosts and select the storage node to upgrade. Click Management Maintenance and click OK . Upgrade the storage node using the procedure at Section 5.2, "In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5" . Reboot the storage node. In the Administration Portal, click Storage Hosts and select the storage node. Click Management Activate . Click the name of the storage node Bricks , and verify that the Self-Heal Info column of all bricks is listed as OK before upgrading the storage node. Upgrade Gluster client software on virtualization hosts After upgrading all storage nodes, update the client software on all virtualization hosts. For Red Hat Virtualization hosts In the Administration Portal, click Compute Hosts and select the node to update. On Red Hat Virtualization versions earlier than 4.2, click Management Maintenance and click OK . Click Installation Upgrade and click OK . The node is automatically updated with the latest packages and reactivated when the update is complete. For Red Hat Enterprise Linux hosts In the Administration Portal, click Compute Hosts and select the node to update. Click Management Maintenance and click OK . Subscribe the host to the correct repositories to receive updates. Update the host. In the Administration Portal, click Management Activate .
[ "systemctl stop glusterd pkill glusterfs pkill glusterfsd pgrep gluster", "yum update" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/upgrading-rhv-rhgs-inservice
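The procedure above checks the Self-Heal Info column in the Administration Portal before moving on to the next node of a replica set. A hedged command-line equivalent, run as root on any storage node with VOLNAME replaced by your Gluster volume name:
gluster volume heal VOLNAME info
# Move on to the next node only when every brick reports "Number of entries: 0".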
Chapter 4. JDBC Transactions
Chapter 4. JDBC Transactions 4.1. JDBC Transaction Types The JBoss Data Virtualization JDBC API supports three types of transactions from a client perspective: global transactions, local transactions, and request level transactions. All are implemented by JBoss Data Virtualization as XA transactions. Refer to the JTA specification at http://www.oracle.com/technetwork/java/javaee/tech/jta-138684.html for more information on XA transactions. Warning Global, local, and request level transactions are mutually exclusive. Request level transactions only apply when not in a global or local transaction. Any attempt to mix global and local transactions concurrently will result in an exception. A hedged local-transaction sketch follows this section.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/chap-jdbc_transactions
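As a brief illustration of the local-transaction case described above, the following hedged Java sketch uses standard JDBC demarcation; the connection URL, credentials, and table are placeholders rather than confirmed JBoss Data Virtualization values, so adapt them to your deployment:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LocalTransactionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; substitute the driver and URL for your deployment.
        try (Connection conn = DriverManager.getConnection("jdbc:teiid:MyVDB@mm://host:31000", "user", "password")) {
            conn.setAutoCommit(false); // begin a local transaction
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("INSERT INTO MyTable (id) VALUES (1)"); // placeholder table
                conn.commit();         // end the local transaction
            } catch (Exception e) {
                conn.rollback();       // undo the work on failure
                throw e;
            }
        }
    }
}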
5.358. xfig
5.358. xfig 5.358.1. RHBA-2012:0985 - xfig bug fix update Updated xfig packages that fix one bug are now available for Red Hat Enterprise Linux 6. The xfig packages contain the Xfig editor. Xfig is an open-source vector graphics editor, which allows you to create simple diagrams and figures. Bug Fix BZ# 806689 Security errata RHSA-2012:0095 changed the way ghostscript handles relative paths. Consequently, as Xfig relied on the original ghostscript behavior, it failed to open encapsulated postscript files and returned the execution stack ending with the following error: Xfig was changed to use absolute paths when executing the ghostscript binary. With this update, Xfig opens encapsulated postscript files and includes them in other figures as expected. Users are advised to upgrade to these updated xfig packages, which fix this bug.
[ "Current allocation mode is local Last OS error: 2 GPL Ghostscript 8.70: Unrecoverable error, exit code 1 EPS object read OK, but no preview bitmap found/generated" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/xfig
15.5.4. Local User Options
15.5.4. Local User Options The following lists directives that characterize the way local users access the server. To use these options, the local_enable directive must be set to YES . chmod_enable - When enabled, the FTP command SITE CHMOD is allowed for local users. This command allows the users to change the permissions on files. The default value is YES . chroot_list_enable - When enabled, the local users listed in the file specified in the chroot_list_file directive are placed in a chroot jail upon login. If enabled in conjunction with the chroot_local_user directive, the local users listed in the file specified in the chroot_list_file directive are not placed in a chroot jail upon login. The default value is NO . chroot_list_file - Specifies the file containing a list of local users referenced when the chroot_list_enable directive is set to YES . The default value is /etc/vsftpd.chroot_list . chroot_local_user - When enabled, local users are change-rooted to their home directories after logging in. The default value is NO . Warning Enabling chroot_local_user opens up a number of security issues, especially for users with upload privileges. For this reason, it is not recommended. guest_enable - When enabled, all non-anonymous users are logged in as the user guest , which is the local user specified in the guest_username directive. The default value is NO . guest_username - Specifies the username the guest user is mapped to. The default value is ftp . local_root - Specifies the directory vsftpd changes to after a local user logs in. There is no default value for this directive. local_umask - Specifies the umask value for file creation. Note that the default value is in octal form (a numerical system with a base of eight), which includes a "0" prefix. Otherwise the value is treated as a base-10 integer. The default value is 022 . passwd_chroot_enable - When enabled in conjunction with the chroot_local_user directive, vsftpd change-roots local users based on the occurrence of the /./ in the home directory field within /etc/passwd . The default value is NO . user_config_dir - Specifies the path to a directory containing configuration files bearing the name of local system users that contain specific settings for that user. Any directive in the user's configuration file overrides those found in /etc/vsftpd/vsftpd.conf . There is no default value for this directive. A short example configuration sketch follows this section.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ftp-vsftpd-conf-opt-usr
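Putting several of the directives above together, the following is a hedged sketch of the relevant fragment of /etc/vsftpd/vsftpd.conf; the values are illustrative (note the warning above about chroot_local_user), and the user_config_dir path is an arbitrary example:
local_enable=YES
local_umask=022
# Chroot only the users listed in the chroot_list_file (see the chroot_list_* directives above).
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list
# Per-user overrides are read from files named after each local user in this directory.
user_config_dir=/etc/vsftpd/user_conf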
Chapter 1. Introduction
Chapter 1. Introduction Red Hat's Trusted Profile Analyzer (RHTPA) is a proactive service that assists in risk management of Open Source Software (OSS) packages and dependencies. The Trusted Profile Analyzer service brings awareness to and remediation of OSS vulnerabilities discovered within the software supply chain. The Red Hat Trusted Profile Analyzer documentation is available here . Items added to the RHTPA 1.2.2 Release Notes: The spog-ui-pod-service pod restarts when launching the Trusted Profile Analyzer console in a web browser
null
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1.2/html/release_notes/introduction