title | content | commands | url |
---|---|---|---|
Chapter 1. Overview of HawtIO | Chapter 1. Overview of HawtIO HawtIO is a diagnostic console for the Red Hat build of Apache Camel and Red Hat build of AMQ. It is a pluggable web diagnostic console built with modern web technologies such as React and PatternFly. HawtIO provides a central interface to examine and manage the details of one or more deployed HawtIO-enabled containers. HawtIO is available when you install HawtIO standalone or use HawtIO on OpenShift. The integrations that you can view and manage in HawtIO depend on the plugins that are running. You can monitor HawtIO and system resources, perform updates, and start or stop services. The pluggable architecture is based on Webpack Module Federation and is highly extensible; you can dynamically extend HawtIO with your own plugins or automatically discover plugins inside the JVM. HawtIO ships with built-in plugins that make it useful out of the box for your JVM application. The plugins include Apache Camel, Connect, JMX, Logs, Runtime, Quartz, and Spring Boot. HawtIO is primarily designed to be used with Camel Quarkus and Camel Spring Boot. It is also a tool for managing microservice applications. HawtIO is cloud-native and ready to run in the cloud; you can deploy it to Kubernetes and OpenShift with the HawtIO Operator. The benefits of HawtIO include: Runtime management of the JVM via JMX, especially that of Camel applications and the AMQ broker, with specialized views Visualization and debugging/tracing of Camel routes Simple management and monitoring of application metrics | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/hawtio_diagnostic_console_guide/overview-of-hawtio |
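The following is a minimal, hypothetical sketch of deploying HawtIO on OpenShift with the HawtIO Operator mentioned above. It assumes the operator is already installed and exposes a Hawtio custom resource; the API version, resource name, and fields shown here are illustrative assumptions and may differ between releases.

```
# Sketch only: create a namespace-scoped HawtIO console through the HawtIO Operator.
# The apiVersion, kind, and spec fields below are assumptions; verify them against
# the operator's CRD in your cluster before applying.
cat <<'EOF' | oc apply -f -
apiVersion: hawt.io/v1
kind: Hawtio
metadata:
  name: hawtio-console
spec:
  type: Namespace   # watch only the current namespace
  replicas: 1
EOF

# Once the console pod is running, the operator typically exposes it through a route:
oc get route hawtio-console
```

A cluster-scoped console would use type: Cluster instead; either way, the plugins that appear in the console depend on what is discovered in the connected JVMs.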
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/preparing_for_disaster_recovery_with_identity_management/proc_providing-feedback-on-red-hat-documentation_preparing-for-disaster-recovery |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, you can highlight the text in a document and add comments. Follow the steps in the procedure to learn about submitting feedback on Red Hat documentation. Prerequisites Log in to the Red Hat Customer Portal. In the Red Hat Customer Portal, view the document in HTML format. Procedure Click Feedback to see existing reader comments. Note The feedback feature is enabled only in the HTML format. Highlight the section of the document where you want to provide feedback. In the prompt menu that displays near the text you selected, click Add Feedback . A text box opens in the feedback section on the right side of the page. Enter your feedback in the text box and click Submit . You have created a documentation issue. To view the issue, click the issue tracker link in the feedback view. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/jboss_eap_xp_upgrade_and_migration_guide/proc_providing-feedback-on-red-hat-documentation_default |
Getting started | Getting started OpenShift Container Platform 4.17 Getting started in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/getting_started/index |
Chapter 1. About Red Hat OpenShift Dev Spaces | Chapter 1. About Red Hat OpenShift Dev Spaces Red Hat OpenShift Dev Spaces provides web-based development environments on Red Hat OpenShift with an enterprise-level setup: Cloud Development Environments (CDE) server IDEs such as Microsoft Visual Studio Code - Open Source and JetBrains IntelliJ IDEA Community ( Technology Preview ) Containerized environments with popular programming languages, frameworks, and Red Hat technologies Red Hat OpenShift Dev Spaces is well-suited for container-based development. Red Hat OpenShift Dev Spaces 3.16 is based on Eclipse Che 7.90. 1.1. Supported platforms OpenShift Dev Spaces runs on OpenShift 4.12-4.16 on the following CPU architectures: AMD64 and Intel 64 ( x86_64 ) IBM Z ( s390x ) The following CPU architecture requires Openshift 4.13-4.16 to run OpenShift Dev Spaces: IBM Power ( ppc64le ) Additional resources OpenShift Documentation Red Hat OpenShift Dev Spaces administration guide 1.2. Support policy For Red Hat OpenShift Dev Spaces 3.16, Red Hat will provide support for deployment, configuration, and use of the product. Additional resources OpenShift Dev Spaces life-cycle and support policy . 1.3. Differences between Red Hat OpenShift Dev Spaces and Eclipse Che There are some differences between Red Hat OpenShift Dev Spaces and the upstream project on which it is based, Eclipse Che: OpenShift Dev Spaces is supported only on Red Hat OpenShift. OpenShift Dev Spaces is based on Red Hat Enterprise Linux and is regularly updated to include the latest security fixes. OpenShift Dev Spaces provides getting-started samples supported in the air-gap mode with languages and technologies such as Quarkus, Lombok, NodeJS, Python, DotNet, Golang, and C/C++. Community samples are available at the Devfile registry page . OpenShift Dev Spaces uses OpenShift OAuth for user login and management. Red Hat provides licensing and packaging to ensure enterprise-level support for OpenShift Dev Spaces. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.0_release_notes_and_known_issues/about-devspaces_devspaces |
About | About OpenShift Container Platform 4.9 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/about/index |
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The AMQ Core Protocol JMS examples require a running message broker with a queue named exampleQueue. Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance. Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir>. C.2. Starting the broker Procedure Use the artemis run command to start the broker. $ <broker-instance-dir>/bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. $ example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named exampleQueue. $ <broker-instance-dir>/bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. $ <broker-instance-dir>/bin/artemis stop Revised on 2021-05-07 10:16:31 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_core_protocol_jms_client/using_the_broker_with_the_examples |
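As a quick, optional sanity check that the queue works end to end, you can use the broker's built-in producer and consumer commands from the same instance directory. This is an illustrative sketch that assumes the anonymous-access broker instance configured above.

```
# Send a few test messages to exampleQueue (anonymous access assumed).
$ <broker-instance-dir>/bin/artemis producer --destination queue://exampleQueue --message-count 10

# Consume them back to confirm the broker and queue are working.
$ <broker-instance-dir>/bin/artemis consumer --destination queue://exampleQueue --message-count 10
```

If the broker requires credentials, both commands also accept --user and --password options.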
Integrating partner content | Integrating partner content Red Hat OpenStack Services on OpenShift 18.0 Integrating and certifying third-party software and hardware for Red Hat OpenStack Services on OpenShift OpenStack Documentation Team [email protected] | [
"FROM registry.redhat.io/rhoso/openstack-cinder-volume-rhel9:18.0.0 1 LABEL name=\"rhoso18/openstack-cinder-volume-partnerX-plugin\" maintainer=\"[email protected]\" vendor=\"PartnerX\" summary=\"RHOSO 18.0 cinder-volume PartnerX PluginY\" description=\"RHOSO 18.0 cinder-volume PartnerX PluginY\" 2 Switch to root to install software dependencies USER root Enable a repo to install a package 3 COPY vendorX.repo /etc/yum.repos.d/vendorX.repo RUN dnf clean all && dnf install -y vendorX-plugin Install a package over the network 4 RUN dnf install -y http://vendorX.com/partnerX-plugin.rpm Install a local package 5 COPY partnerX-plugin.rpm /tmp RUN dnf install -y /tmp/partnerX-plugin.rpm && rm -f /tmp/partnerX-plugin.rpm Install a python package from PyPI 6 RUN curl -OL https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py --no-setuptools --no-wheel && pip3 install partnerX-plugin && rm -f get-pip.py Add required license as text file(s) in /licenses directory (GPL, MIT, APACHE, Partner End User Agreement, etc) RUN mkdir /licenses COPY licensing.txt /licenses Switch to cinder user USER cinder",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderVolumeImages: backend-X1: registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin backend-X2: registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin 1 backend-Y: registry.connect.redhat.com/partnerY/openstack-cinder-volume-partnerY-plugin manilaShareImages: 2 backend-Z: registry.connect.redhat.com/partnerZ/openstack-manila-share-partnerZ-plugin",
"apiVersion: v1 kind: Secret metadata: name: cinder-volume-example-config 1 type: Opaque stringData: config.xml: | 2 <example-credentials>example</example-credentials> 3",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderApi: cinderScheduler: cinderVolumes: example: 1 customServiceConfig: | [example] volume_backend_name=example volume_driver=cinder....ExampleDriver networkAttachments: - storage replicas: 1 extraMounts: 2 - extraVol: - mounts: - name: example-config mountPath: /etc/cinder/config.xml 3 subPath: config.xml 4 readOnly: true propagation: - example 5 volumes: - name: example-config secret: secretName: cinder-volume-example-config 6",
"FROM registry.redhat.io/rhoso/openstack-neutron-server-rhel9:18.0.1 1 LABEL name=\"rhoso18/openstack-neutron-server-partnerX-plugin\" maintainer=\"[email protected]\" vendor=\"PartnerX\" summary=\"RHOSO 18.0 neutron-server PartnerX PluginY\" description=\"RHOSO 18.0 neutron-server PartnerX PluginY\" 2 Switch to root to install software dependencies USER root Enable a repo to install a package 3 COPY vendorX.repo /etc/yum.repos.d/vendorX.repo RUN dnf clean all && dnf install -y vendorX-plugin Install a package over the network 4 RUN dnf install -y http://vendorX.com/partnerX-plugin.rpm Install a local package 5 COPY partnerX-plugin.rpm /tmp RUN dnf install -y /tmp/partnerX-plugin.rpm && rm -f /tmp/partnerX-plugin.rpm Install a python package from PyPI 6 RUN curl -OL https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py --no-setuptools --no-wheel && pip3 install partnerX-plugin && rm -f get-pip.py Add required license as text file(s) in /licenses directory (GPL, MIT, APACHE, Partner End User Agreement, etc) RUN mkdir /licenses COPY licensing.txt /licenses Switch to neutron user USER neutron",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: neutronAPIImage: registry.connect.redhat.com/partnerX/openstack-neutron-server-partnerX-plugin:18.0.1",
"apiVersion: v1 kind: Secret metadata: name: cinder-volume-example-config 1 type: Opaque stringData: partner_config.ini: | 2 example_credentials=example 3",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: neutron: template: databaseAccount: neutron databaseInstance: openstack memcachedInstance: memcached networkAttachments: - internalapi passwordSelectors: service: NeutronPassword rabbitMqClusterName: rabbitmq replicas: 1 secret: osp-secret serviceUser: neutron ml2MechanismDrivers: - partner-mech-driver 1 customServiceConfig: | 2 [example] foo=bar extraMounts: 3 - extraVol: - mounts: - name: partner-config mountPath: /etc/neutron/neutron.conf.d/partner_config.ini 4 subPath: partner_config.ini 5 readOnly: true volumes: - name: partner-config secret: secretName: neutron-server-partnerX-config 6",
"FROM quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest COPY neutron_agent_partner_role /usr/share/ansible/roles/neutron_agent_partner_role 1 COPY playbooks/neutron_agent_partner.yaml /usr/share/ansible/collections/ansible_collections/osp/edpm/playbooks/ 2",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-custom-partner-service 1 spec: label: dataplane-deployment-neutron-custom-partner-service playbook: osp.edpm.neutron_agent_partner 2 openStackAnsibleEERunnerImage: openstack-ansibleee-partnerX-runner:latest 3",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-edpm spec: services: - download-cache - bootstrap - configure-network - validate-network - install-os - configure-os - run-os - neutron-custom-partner-service 1 - libvirt - nova nodes: edpm-compute: ansible: ansibleHost: 172.20.12.67 ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret ansibleUser: cloud-admin ansibleVars: ansible_ssh_transfer_method: scp ctlplane_ip: 172.20.12.67 external_ip: 172.20.12.76 fqdn_internalapi: edpm-compute-1.example.com internalapi_ip: 172.17.0.101 storage_ip: 172.18.0.101 tenant_ip: 172.10.0.101 hostName: edpm-compute-0 networkConfig: {} nova: cellName: cell1 deploy: true novaInstance: nova"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html-single/integrating_partner_content/index |
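The Dockerfile and custom-resource snippets in the commands above outline the general workflow: build a plugin image, publish it, and point the deployment at it. As a hedged illustration of how those pieces might be tied together, the sketch below uses placeholder registry and image names and assumes you have push access to a registry the cluster can pull from; the exact image name and OpenStackVersion fields should follow the examples shown above.

```
# Illustrative only: build and publish the partner plugin image from the Dockerfile above.
# Registry and image names are placeholders.
podman build -t registry.example.com/partnerx/openstack-cinder-volume-partnerx-plugin:18.0.0 .
podman push registry.example.com/partnerx/openstack-cinder-volume-partnerx-plugin:18.0.0

# Point the control plane at the custom image by patching the OpenStackVersion resource
# (the field path mirrors the customContainerImages example in this guide).
oc patch openstackversion openstack --type merge -p \
  '{"spec":{"customContainerImages":{"cinderVolumeImages":{"backend-X1":"registry.example.com/partnerx/openstack-cinder-volume-partnerx-plugin:18.0.0"}}}}'
```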
Chapter 1. The OpenStack Client | Chapter 1. The OpenStack Client The openstack client is a common OpenStack command-line interface (CLI). This chapter documents the main options for openstack version 4.0.0. Command-line interface to the OpenStack APIs Usage: Table 1.1. Optional Arguments Value Summary --version Show program's version number and exit -v, --verbose Increase verbosity of output. can be repeated. -q, --quiet Suppress output except warnings and errors. --log-file LOG_FILE Specify a file to log output. disabled by default. -h, --help Show help message and exit. --debug Show tracebacks on errors. --os-cloud <cloud-config-name> Cloud name in clouds.yaml (env: os_cloud) --os-region-name <auth-region-name> Authentication region name (env: os_region_name) --os-cacert <ca-bundle-file> CA certificate bundle file (env: os_cacert) --os-cert <certificate-file> Client certificate bundle file (env: os_cert) --os-key <key-file> Client certificate key file (env: os_key) --verify Verify server certificate (default) --insecure Disable server certificate verification --os-default-domain <auth-domain> Default domain id, default=default. (env: OS_DEFAULT_DOMAIN) --os-interface <interface> Select an interface type. valid interface types: [admin, public, internal]. default=public, (Env: OS_INTERFACE) --os-service-provider <service_provider> Authenticate with and perform the command on a service provider using Keystone-to-Keystone federation. Must also specify the remote project option. --os-remote-project-name <remote_project_name> Project name when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-id <remote_project_id> Project id when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-name <remote_project_domain_name> Domain name of the project when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-id <remote_project_domain_id> Domain id of the project when authenticating to a service provider if using Keystone-to-Keystone federation.
--timing Print api call timing info --os-beta-command Enable beta commands which are subject to change --os-profile hmac-key Hmac key for encrypting profiling context data --os-compute-api-version <compute-api-version> Compute api version, default=2.1 (env: OS_COMPUTE_API_VERSION) --os-identity-api-version <identity-api-version> Identity api version, default=3 (env: OS_IDENTITY_API_VERSION) --os-image-api-version <image-api-version> Image api version, default=2 (env: OS_IMAGE_API_VERSION) --os-network-api-version <network-api-version> Network api version, default=2.0 (env: OS_NETWORK_API_VERSION) --os-object-api-version <object-api-version> Object api version, default=1 (env: OS_OBJECT_API_VERSION) --os-volume-api-version <volume-api-version> Volume api version, default=3 (env: OS_VOLUME_API_VERSION) --os-queues-api-version <queues-api-version> Queues api version, default=2 (env: OS_QUEUES_API_VERSION) --os-database-api-version <database-api-version> Database api version, default=1 (env: OS_DATABASE_API_VERSION) --os-tripleoclient-api-version <tripleoclient-api-version> Tripleo client api version, default=1 (env: OS_TRIPLEOCLIENT_API_VERSION) --os-data-processing-api-version <data-processing-api-version> Data processing api version, default=1.1 (env: OS_DATA_PROCESSING_API_VERSION) --os-data-processing-url OS_DATA_PROCESSING_URL Data processing api url, (env: OS_DATA_PROCESSING_API_URL) --os-loadbalancer-api-version <loadbalancer-api-version> Osc plugin api version, default=2.0 (env: OS_LOADBALANCER_API_VERSION) --os-workflow-api-version <workflow-api-version> Workflow api version, default=2 (env: OS_WORKFLOW_API_VERSION) --os-container-infra-api-version <container-infra-api-version> Container-infra api version, default=1 (env: OS_CONTAINER_INFRA_API_VERSION) --os-baremetal-api-version <baremetal-api-version> Bare metal api version, default="latest" (the maximum version supported by both the client and the server). (Env: OS_BAREMETAL_API_VERSION) --inspector-api-version INSPECTOR_API_VERSION Inspector api version, only 1 is supported now (env: INSPECTOR_VERSION). --inspector-url INSPECTOR_URL Inspector url, defaults to localhost (env: INSPECTOR_URL). --os-orchestration-api-version <orchestration-api-version> Orchestration api version, default=1 (env: OS_ORCHESTRATION_API_VERSION) --os-dns-api-version <dns-api-version> Dns api version, default=2 (env: os_dns_api_version) --os-key-manager-api-version <key-manager-api-version> Barbican api version, default=1 (env: OS_KEY_MANAGER_API_VERSION) --os-metrics-api-version <metrics-api-version> Metrics api version, default=1 (env: OS_METRICS_API_VERSION) --os-alarming-api-version <alarming-api-version> Queues api version, default=2 (env: OS_ALARMING_API_VERSION) --os-auth-type <auth-type> Select an authentication type. available types: token, v3totp, none, password, v3password, v3applicationcredential, v2password, v3oidcpassword, aodh-noauth, gnocchi-noauth, v3multifactor, v3oidcaccesstoken, v3oidcauthcode, noauth, v1password, v3adfspassword, gnocchi-basic, v3samlpassword, v3oidcclientcredentials, v2token, v3token, admin_token, v3oauth1, v3tokenlessauth. 
Default: selected based on --os-username/--os-token (Env: OS_AUTH_TYPE) --os-auth-url <auth-auth-url> With token: authentication url with v3totp: Authentication URL With password: Authentication URL With v3password: Authentication URL With v3applicationcredential: Authentication URL With v2password: Authentication URL With v3oidcpassword: Authentication URL With v3multifactor: Authentication URL With v3oidcaccesstoken: Authentication URL With v3oidcauthcode: Authentication URL With v1password: Authentication URL With v3adfspassword: Authentication URL With v3samlpassword: Authentication URL With v3oidcclientcredentials: Authentication URL With v2token: Authentication URL With v3token: Authentication URL With v3oauth1: Authentication URL With v3tokenlessauth: Authentication URL (Env: OS_AUTH_URL) --os-system-scope <auth-system-scope> With token: scope for system operations with v3totp: Scope for system operations With password: Scope for system operations With v3password: Scope for system operations With v3applicationcredential: Scope for system operations With v3oidcpassword: Scope for system operations With v3multifactor: Scope for system operations With v3oidcaccesstoken: Scope for system operations With v3oidcauthcode: Scope for system operations With v3adfspassword: Scope for system operations With v3samlpassword: Scope for system operations With v3oidcclientcredentials: Scope for system operations With v3token: Scope for system operations (Env: OS_SYSTEM_SCOPE) --os-domain-id <auth-domain-id> With token: domain id to scope to with v3totp: domain ID to scope to With password: Domain ID to scope to With v3password: Domain ID to scope to With v3applicationcredential: Domain ID to scope to With v3oidcpassword: Domain ID to scope to With v3multifactor: Domain ID to scope to With v3oidcaccesstoken: Domain ID to scope to With v3oidcauthcode: Domain ID to scope to With v3adfspassword: Domain ID to scope to With v3samlpassword: Domain ID to scope to With v3oidcclientcredentials: Domain ID to scope to With v3token: Domain ID to scope to With v3tokenlessauth: Domain ID to scope to (Env: OS_DOMAIN_ID) --os-domain-name <auth-domain-name> With token: domain name to scope to with v3totp: Domain name to scope to With password: Domain name to scope to With v3password: Domain name to scope to With v3applicationcredential: Domain name to scope to With v3oidcpassword: Domain name to scope to With v3multifactor: Domain name to scope to With v3oidcaccesstoken: Domain name to scope to With v3oidcauthcode: Domain name to scope to With v3adfspassword: Domain name to scope to With v3samlpassword: Domain name to scope to With v3oidcclientcredentials: Domain name to scope to With v3token: Domain name to scope to With v3tokenlessauth: Domain name to scope to (Env: OS_DOMAIN_NAME) --os-project-id <auth-project-id> With token: project id to scope to with v3totp: Project ID to scope to With password: Project ID to scope to With v3password: Project ID to scope to With v3applicationcredential: Project ID to scope to With v3oidcpassword: Project ID to scope to With aodh- noauth: Project ID With gnocchi-noauth: Project ID With v3multifactor: Project ID to scope to With v3oidcaccesstoken: Project ID to scope to With v3oidcauthcode: Project ID to scope to With noauth: Project ID With v3adfspassword: Project ID to scope to With v3samlpassword: Project ID to scope to With v3oidcclientcredentials: Project ID to scope to With v3token: Project ID to scope to With v3tokenlessauth: Project ID to scope to (Env: OS_PROJECT_ID) 
--os-project-name <auth-project-name> With token: project name to scope to with v3totp: Project name to scope to With password: Project name to scope to With v3password: Project name to scope to With v3applicationcredential: Project name to scope to With v3oidcpassword: Project name to scope to With v3multifactor: Project name to scope to With v3oidcaccesstoken: Project name to scope to With v3oidcauthcode: Project name to scope to With v1password: Swift account to use With v3adfspassword: Project name to scope to With v3samlpassword: Project name to scope to With v3oidcclientcredentials: Project name to scope to With v3token: Project name to scope to With v3tokenlessauth: Project name to scope to (Env: OS_PROJECT_NAME) --os-project-domain-id <auth-project-domain-id> With token: domain id containing project with v3totp: Domain ID containing project With password: Domain ID containing project With v3password: Domain ID containing project With v3applicationcredential: Domain ID containing project With v3oidcpassword: Domain ID containing project With v3multifactor: Domain ID containing project With v3oidcaccesstoken: Domain ID containing project With v3oidcauthcode: Domain ID containing project With v3adfspassword: Domain ID containing project With v3samlpassword: Domain ID containing project With v3oidcclientcredentials: Domain ID containing project With v3token: Domain ID containing project With v3tokenlessauth: Domain ID containing project (Env: OS_PROJECT_DOMAIN_ID) --os-project-domain-name <auth-project-domain-name> With token: domain name containing project with v3totp: Domain name containing project With password: Domain name containing project With v3password: Domain name containing project With v3applicationcredential: Domain name containing project With v3oidcpassword: Domain name containing project With v3multifactor: Domain name containing project With v3oidcaccesstoken: Domain name containing project With v3oidcauthcode: Domain name containing project With v3adfspassword: Domain name containing project With v3samlpassword: Domain name containing project With v3oidcclientcredentials: Domain name containing project With v3token: Domain name containing project With v3tokenlessauth: Domain name containing project (Env: OS_PROJECT_DOMAIN_NAME) --os-trust-id <auth-trust-id> With token: trust id with v3totp: trust id with password: Trust ID With v3password: Trust ID With v3applicationcredential: Trust ID With v2password: Trust ID With v3oidcpassword: Trust ID With v3multifactor: Trust ID With v3oidcaccesstoken: Trust ID With v3oidcauthcode: Trust ID With v3adfspassword: Trust ID With v3samlpassword: Trust ID With v3oidcclientcredentials: Trust ID With v2token: Trust ID With v3token: Trust ID (Env: OS_TRUST_ID) --os-default-domain-id <auth-default-domain-id> With token: optional domain id to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With password: Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. (Env: OS_DEFAULT_DOMAIN_ID) --os-default-domain-name <auth-default-domain-name> With token: optional domain name to use with v3 api and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With password: Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. 
(Env: OS_DEFAULT_DOMAIN_NAME) --os-token <auth-token> With token: token to authenticate with with v2token: Token With v3token: Token to authenticate with With admin_token: The token that will always be used (Env: OS_TOKEN) --os-user-id <auth-user-id> With v3totp: user id with password: user id with v3password: User ID With v3applicationcredential: User ID With v2password: User ID to login with With aodh- noauth: User ID With gnocchi-noauth: User ID With noauth: User ID (Env: OS_USER_ID) --os-username <auth-username> With v3totp: username with password: username with v3password: Username With v3applicationcredential: Username With v2password: Username to login with With v3oidcpassword: Username With v1password: Username to login with With v3adfspassword: Username With v3samlpassword: Username (Env: OS_USERNAME) --os-user-domain-id <auth-user-domain-id> With v3totp: user's domain id with password: user's domain id With v3password: User's domain id With v3applicationcredential: User's domain id (Env: OS_USER_DOMAIN_ID) --os-user-domain-name <auth-user-domain-name> With v3totp: user's domain name with password: user's domain name With v3password: User's domain name With v3applicationcredential: User's domain name (Env: OS_USER_DOMAIN_NAME) --os-passcode <auth-passcode> With v3totp: user's totp passcode (env: os_passcode) --os-endpoint <auth-endpoint> With none: the endpoint that will always be used with gnocchi-noauth: Gnocchi endpoint With noauth: Cinder endpoint With gnocchi-basic: Gnocchi endpoint With admin_token: The endpoint that will always be used (Env: OS_ENDPOINT) --os-password <auth-password> With password: user's password with v3password: user's password With v2password: Password to use With v3oidcpassword: Password With v1password: Password to use With v3adfspassword: Password With v3samlpassword: Password (Env: OS_PASSWORD) --os-application-credential-secret <auth-application-credential-secret> With v3applicationcredential: application credential auth secret (Env: OS_APPLICATION_CREDENTIAL_SECRET) --os-application-credential-id <auth-application-credential-id> With v3applicationcredential: application credential ID (Env: OS_APPLICATION_CREDENTIAL_ID) --os-application-credential-name <auth-application-credential-name> With v3applicationcredential: application credential name (Env: OS_APPLICATION_CREDENTIAL_NAME) --os-identity-provider <auth-identity-provider> With v3oidcpassword: identity provider's name with v3oidcaccesstoken: Identity Provider's name With v3oidcauthcode: Identity Provider's name With v3adfspassword: Identity Provider's name With v3samlpassword: Identity Provider's name With v3oidcclientcredentials: Identity Provider's name (Env: OS_IDENTITY_PROVIDER) --os-protocol <auth-protocol> With v3oidcpassword: protocol for federated plugin With v3oidcaccesstoken: Protocol for federated plugin With v3oidcauthcode: Protocol for federated plugin With v3adfspassword: Protocol for federated plugin With v3samlpassword: Protocol for federated plugin With v3oidcclientcredentials: Protocol for federated plugin (Env: OS_PROTOCOL) --os-client-id <auth-client-id> With v3oidcpassword: oauth 2.0 client id with v3oidcauthcode: OAuth 2.0 Client ID With v3oidcclientcredentials: OAuth 2.0 Client ID (Env: OS_CLIENT_ID) --os-client-secret <auth-client-secret> With v3oidcpassword: oauth 2.0 client secret with v3oidcauthcode: OAuth 2.0 Client Secret With v3oidcclientcredentials: OAuth 2.0 Client Secret (Env: OS_CLIENT_SECRET) --os-openid-scope <auth-openid-scope> With v3oidcpassword: openid 
connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcauthcode: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcclientcredentials: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. (Env: OS_OPENID_SCOPE) --os-access-token-endpoint <auth-access-token-endpoint> With v3oidcpassword: openid connect provider token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcauthcode: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcclientcredentials: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. (Env: OS_ACCESS_TOKEN_ENDPOINT) --os-discovery-endpoint <auth-discovery-endpoint> With v3oidcpassword: openid connect discovery document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well-known/openid- configuration With v3oidcauthcode: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well-known/openid- configuration With v3oidcclientcredentials: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well- known/openid-configuration (Env: OS_DISCOVERY_ENDPOINT) --os-access-token-type <auth-access-token-type> With v3oidcpassword: oauth 2.0 authorization server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcauthcode: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcclientcredentials: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" (Env: OS_ACCESS_TOKEN_TYPE) --os-roles <auth-roles> With aodh-noauth: roles with gnocchi-noauth: roles (Env: OS_ROLES) --os-aodh-endpoint <auth-aodh-endpoint> With aodh-noauth: aodh endpoint (env: OS_AODH_ENDPOINT) --os-auth-methods <auth-auth-methods> With v3multifactor: methods to authenticate with. 
(Env: OS_AUTH_METHODS) --os-access-token <auth-access-token> With v3oidcaccesstoken: oauth 2.0 access token (env: OS_ACCESS_TOKEN) --os-redirect-uri <auth-redirect-uri> With v3oidcauthcode: openid connect redirect url (env: OS_REDIRECT_URI) --os-code <auth-code> With v3oidcauthcode: oauth 2.0 authorization code (Env: OS_CODE) --os-identity-provider-url <auth-identity-provider-url> With v3adfspassword: an identity provider url, where the SAML authentication request will be sent. With v3samlpassword: An Identity Provider URL, where the SAML2 authentication request will be sent. (Env: OS_IDENTITY_PROVIDER_URL) --os-service-provider-endpoint <auth-service-provider-endpoint> With v3adfspassword: service provider's endpoint (env: OS_SERVICE_PROVIDER_ENDPOINT) --os-service-provider-entity-id <auth-service-provider-entity-id> With v3adfspassword: service provider's saml entity id (Env: OS_SERVICE_PROVIDER_ENTITY_ID) --os-user <auth-user> With gnocchi-basic: user (env: os_user) --os-consumer-key <auth-consumer-key> With v3oauth1: oauth consumer id/key (env: OS_CONSUMER_KEY) --os-consumer-secret <auth-consumer-secret> With v3oauth1: oauth consumer secret (env: OS_CONSUMER_SECRET) --os-access-key <auth-access-key> With v3oauth1: oauth access key (env: os_access_key) --os-access-secret <auth-access-secret> With v3oauth1: oauth access secret (env: OS_ACCESS_SECRET) | [
"openstack [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug] [--os-cloud <cloud-config-name>] [--os-region-name <auth-region-name>] [--os-cacert <ca-bundle-file>] [--os-cert <certificate-file>] [--os-key <key-file>] [--verify | --insecure] [--os-default-domain <auth-domain>] [--os-interface <interface>] [--os-service-provider <service_provider>] [--os-remote-project-name <remote_project_name> | --os-remote-project-id <remote_project_id>] [--os-remote-project-domain-name <remote_project_domain_name> | --os-remote-project-domain-id <remote_project_domain_id>] [--timing] [--os-beta-command] [--os-profile hmac-key] [--os-compute-api-version <compute-api-version>] [--os-identity-api-version <identity-api-version>] [--os-image-api-version <image-api-version>] [--os-network-api-version <network-api-version>] [--os-object-api-version <object-api-version>] [--os-volume-api-version <volume-api-version>] [--os-queues-api-version <queues-api-version>] [--os-database-api-version <database-api-version>] [--os-tripleoclient-api-version <tripleoclient-api-version>] [--os-data-processing-api-version <data-processing-api-version>] [--os-data-processing-url OS_DATA_PROCESSING_URL] [--os-loadbalancer-api-version <loadbalancer-api-version>] [--os-workflow-api-version <workflow-api-version>] [--os-container-infra-api-version <container-infra-api-version>] [--os-baremetal-api-version <baremetal-api-version>] [--inspector-api-version INSPECTOR_API_VERSION] [--inspector-url INSPECTOR_URL] [--os-orchestration-api-version <orchestration-api-version>] [--os-dns-api-version <dns-api-version>] [--os-key-manager-api-version <key-manager-api-version>] [--os-metrics-api-version <metrics-api-version>] [--os-alarming-api-version <alarming-api-version>] [--os-auth-type <auth-type>] [--os-auth-url <auth-auth-url>] [--os-system-scope <auth-system-scope>] [--os-domain-id <auth-domain-id>] [--os-domain-name <auth-domain-name>] [--os-project-id <auth-project-id>] [--os-project-name <auth-project-name>] [--os-project-domain-id <auth-project-domain-id>] [--os-project-domain-name <auth-project-domain-name>] [--os-trust-id <auth-trust-id>] [--os-default-domain-id <auth-default-domain-id>] [--os-default-domain-name <auth-default-domain-name>] [--os-token <auth-token>] [--os-user-id <auth-user-id>] [--os-username <auth-username>] [--os-user-domain-id <auth-user-domain-id>] [--os-user-domain-name <auth-user-domain-name>] [--os-passcode <auth-passcode>] [--os-endpoint <auth-endpoint>] [--os-password <auth-password>] [--os-application-credential-secret <auth-application-credential-secret>] [--os-application-credential-id <auth-application-credential-id>] [--os-application-credential-name <auth-application-credential-name>] [--os-identity-provider <auth-identity-provider>] [--os-protocol <auth-protocol>] [--os-client-id <auth-client-id>] [--os-client-secret <auth-client-secret>] [--os-openid-scope <auth-openid-scope>] [--os-access-token-endpoint <auth-access-token-endpoint>] [--os-discovery-endpoint <auth-discovery-endpoint>] [--os-access-token-type <auth-access-token-type>] [--os-roles <auth-roles>] [--os-aodh-endpoint <auth-aodh-endpoint>] [--os-auth-methods <auth-auth-methods>] [--os-access-token <auth-access-token>] [--os-redirect-uri <auth-redirect-uri>] [--os-code <auth-code>] [--os-identity-provider-url <auth-identity-provider-url>] [--os-service-provider-endpoint <auth-service-provider-endpoint>] [--os-service-provider-entity-id <auth-service-provider-entity-id>] [--os-user <auth-user>] [--os-consumer-key 
<auth-consumer-key>] [--os-consumer-secret <auth-consumer-secret>] [--os-access-key <auth-access-key>] [--os-access-secret <auth-access-secret>]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/the_openstack_client |
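Most of the --os-* options listed above can also be supplied through the environment variables shown in their env: annotations, which keeps individual commands short. The following sketch uses placeholder endpoint and credential values:

```
# Illustrative: export authentication settings once instead of repeating --os-* flags.
export OS_AUTH_URL=https://keystone.example.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=admin
export OS_PASSWORD='<password>'
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default

# Equivalent to passing the corresponding --os-* options explicitly:
openstack server list
openstack --debug server list   # adds tracebacks on errors, as described above
```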
Part II. Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 | Part II. Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 This document describes how to install Red Hat Process Automation Manager on a Red Hat JBoss Enterprise Application Platform 7.4 instance. Prerequisites You have reviewed the information in Planning a Red Hat Process Automation Manager installation . You have installed the latest patch release of Red Hat JBoss Enterprise Application Platform 7.4. For information about installing Red Hat JBoss EAP, see the Red Hat JBoss Enterprise Application Platform installation guide . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/assembly-install-on-eap |
Appendix A. List of tickets by component | Appendix A. List of tickets by component Bugzilla and JIRA IDs are listed in this document for reference. Bugzilla bugs that are publicly accessible include a link to the ticket. Component Tickets 389-ds-base BZ#2052527 , BZ#2057063 , BZ#2057066 , BZ#1872451 , BZ#2053204 , BZ#2063140 , BZ#2047175 NetworkManager BZ#2068525, BZ#2059608 , BZ#2030997 , BZ#2079849 , BZ#2097293 , BZ#2029636 , BZ#1894877, BZ#2151040 anaconda BZ#2059414, BZ#2053710 , BZ#2082132, BZ#2050140 , BZ#1877697, BZ#1914955 , BZ#1929105 , BZ#1997832, BZ#2052938 , BZ#2107346 , BZ#2125542 , BZ#2115783 ansible-collection-microsoft-sql BZ#2066337 ansible-collection-redhat-rhel_mgmt BZ#2112434 ansible-freeipa BZ#2076567 bind BZ#1984982 catatonit BZ#2074193 chrony BZ#2047415 , BZ#2051441 clevis BZ#2107078 cloud-init BZ#1750862 cockpit-appstream BZ#2030836 cockpit BZ#2056786 cronie BZ#2090691 crypto-policies BZ#2102774 , BZ#2070604 cyrus-sasl BZ#1995600 device-mapper-multipath BZ#2084365 , BZ#2033080, BZ#2011699 distribution BZ#2063773 dnf-plugins-core BZ#2066646 dnf BZ#2053014 , BZ#2073510 dotnet7.0 BZ#2112027 dyninst BZ#2057675 edk2 BZ#1935497 elfutils BZ#2088774 fapolicyd BZ#2100041 , BZ#2054740 , BZ#2070655 firefox BZ#2013247 firewalld BZ#2040689 , BZ#2039542 frr BZ#2069563 gcc-toolset-12-annobin BZ#2077438 gcc-toolset-12-binutils BZ#2077445 gcc-toolset-12-gcc BZ#2077465 gcc-toolset-12-gdb BZ#2077494 gcc BZ#2063255 gdb BZ#1870017 gdm BZ#2097308 gimp BZ#2047161 glibc BZ#2033683 , BZ#2096191 , BZ#2063142 , BZ#2077838, BZ#2085529 , BZ#2003291 , BZ#2091549 gnome-settings-daemon BZ#2100467 gnupg2 BZ#2070722 , BZ#2073567 gnutls BZ#2042009 golang BZ#2075169, BZ#2111072 , BZ#2092016 grub2 BZ#2074761, BZ#2026579 grubby BZ#1978226 , BZ#1969362 , BZ#2127453 httpd BZ#2079939 , BZ#2065677 ipa BZ#747959 , BZ#2091988 , BZ#2083218 , BZ#2100227 , BZ#2084180 , BZ#2084166 , BZ#2069202 , BZ#2057471 , BZ#2124243 , BZ#2089907 jmc-core BZ#1980981 kdump-anaconda-addon BZ#1959203, BZ#2017401 kernel-rt BZ#2061574 kernel JIRA:RHELPLAN-117713, BZ#2027894, BZ#2066451, BZ#2079368, BZ#2065226, BZ#2013413, BZ#2069045, BZ#2001936, BZ#2097188, BZ#2096127, BZ#2054379, BZ#2073541, BZ#2030922, BZ#1945040 , BZ#2100898, BZ#2068432, BZ#2046472, BZ#1613522, BZ#1874182, BZ#1995338, BZ#1570255, BZ#2023416, BZ#2021672, BZ#2000616, BZ#2013650, BZ#2132480, BZ#2060150, BZ#2059545, BZ#2069758, BZ#1960467, BZ#2005173, BZ#2129288 kexec-tools BZ#2064708 , BZ#2065013 keylime BZ#2138167 , BZ#2140670 , BZ#2142009 kmod-kvdo BZ#2064802 kmod BZ#2103605 krb5 BZ#2068935 , BZ#2106043 , BZ#2060798 , BZ#2077450 , BZ#2106296 , BZ#2060421 libdnf BZ#2108969 libnvme BZ#2099619 libsepol BZ#2069718 , BZ#2079276 libvirt BZ#2064194, BZ#2014487 libvpd BZ#2051288 libxcrypt BZ#2034569 llvm-toolset BZ#2061041 lsvpd BZ#2051289 lvm2 BZ#2038183 maven BZ#2083112 mysql BZ#1991500 nfs-utils BZ#2081114 nmstate BZ#2084474 , BZ#2082043 nodejs BZ#2083072 nss BZ#2091905 nvme-cli BZ#2090121 nvme-stas BZ#1893841 open-vm-tools BZ#2061193, BZ#2037657 opencryptoki BZ#2044179 openscap BZ#2109485 openssh BZ#2066882 , BZ#2087121 , BZ#2056884 openssl BZ#2060510, BZ#2053289 , BZ#2066412, BZ#2063947 , BZ#2004915 , BZ#2058663 , BZ#1975836 , BZ#1681178, BZ#1685470, BZ#2060044 , BZ#2071631 pacemaker BZ#2121838 , BZ#2072108 pause-container BZ#2106816 pcre2 BZ#2086494 pcs BZ#2024522 , BZ#2054671 , BZ#2058251 , BZ#2058252 , BZ#2058246 , BZ#2058243 , BZ#1301204 php BZ#2070040 pki-core BZ#2084181 podman BZ#2097708 , BZ#2027576 , BZ#2069279 policycoreutils BZ#2115242 powerpc-utils 
BZ#1920964 ppc64-diag BZ#2051286 procps-ng BZ#2052536 , BZ#2003033 pykickstart BZ#2083269 qemu-kvm BZ#2044218, BZ#1965079 , BZ#1951814 , BZ#2060839 , BZ#2014229 , BZ#2052424 , BZ#1817965 , BZ#1789206 , BZ#2090712 , BZ#2020146 rear BZ#2111059 , BZ#2097437 , BZ#2115958 , BZ#2083272 , BZ#2120736, BZ#2119501 resource-agents BZ#1826455 rhel-system-roles BZ#2072385 , BZ#2086965 , BZ#2065337 , BZ#2079622 , BZ#2043010 , BZ#2065383 , BZ#2112145 , BZ#2052081 , BZ#2052086 , BZ#2065392 , BZ#2072742 , BZ#2072745 , BZ#2072746 , BZ#2075119 , BZ#2078989 , BZ#2079627 , BZ#2093423 , BZ#2100292 , BZ#2100942 , BZ#2115154 , BZ#2115157 , BZ#2115152 , BZ#2051737 , BZ#2065382 , BZ#2065394 , BZ#2115886 , BZ#2100605 , BZ#2060523 , BZ#2060525 , BZ#2065393 , BZ#2070462, BZ#2083376 , BZ#2083410 , BZ#2100286 , BZ#2109998 , BZ#2115156 , BZ#2071804 , BZ#2100294 , BZ#1999770 rsyslog BZ#2064318 rust BZ#2075337 s390utils BZ#1870699, BZ#1932480 samba BZ#2077487 , Jira:RHELDOCS-16612 sblim-wbemcli BZ#2083577 scap-security-guide BZ#2070563 , BZ#2120978 , BZ#2038978 selinux-policy BZ#1965013, BZ#2081425, BZ#2076681 , BZ#2064274 sos BZ#1869561 sssd BZ#1978119, BZ#2065693 , BZ#2056482 stalld BZ#2107275 stratisd BZ#1990905 , BZ#2040352 , BZ#2039960 , BZ#2007018 , BZ#2005110 , BZ#2041558 subscription-manager BZ#2092014 , BZ#2136694 systemd BZ#2018112 systemtap BZ#2083727 tigervnc BZ#2060308 tpm2-tools BZ#2090748 tuned BZ#2093847 ubi8-container BZ#2120378 udisks2 BZ#1983602 unbound BZ#2087120 , BZ#2071543 , BZ#2070495 valgrind BZ#1993976 virt-who BZ#2054504 virtio-win BZ#1969724 , BZ#2084003 whois BZ#2054043 xmlstarlet BZ#2069689 xorg-x11-server BZ#1894612 other JIRA:RHELPLAN-92522, BZ#2125549 , BZ#2128016, BZ#1937031, JIRA:RHELPLAN-121982, JIRA:RHELPLAN-95456, JIRA:RHELPLAN-122321, JIRA:RHELPLAN-118462, JIRA:RHELPLAN-101140, JIRA:RHELPLAN-132023, JIRA:RHELPLAN-123369, JIRA:RHELPLAN-117109, JIRA:RHELPLAN-130379, BZ#2049492 , JIRA:RHELPLAN-130376, JIRA:RHELPLAN-122735, BZ#2070793 , BZ#2122716 , JIRA:RHELPLAN-123368, JIRA:RHELPLAN-135601, JIRA:RHELPLAN-135602, BZ#2139877 , JIRA:RHELPLAN-122776, JIRA:RHELPLAN-121180, BZ#2094015 , JIRA:RHELPLAN-109067, JIRA:RHELPLAN-115603, JIRA:RHELPLAN-65217, BZ#2020529 , BZ#2030412 , BZ#2046653, JIRA:RHELPLAN-103993, JIRA:RHELPLAN-122345, JIRA:RHELPLAN-129327, JIRA:RHELPLAN-74672, BZ#1927780, JIRA:RHELPLAN-110763, BZ#1935544, BZ#2089200 , JIRA:RHELPLAN-15509, JIRA:RHELPLAN-99136, JIRA:RHELPLAN-103232, BZ#1899167, BZ#1979521 , JIRA:RHELPLAN-100087, JIRA:RHELPLAN-100639, JIRA:RHELPLAN-10304, BZ#2058153 , JIRA:RHELPLAN-113995, JIRA:RHELPLAN-121048, JIRA:RHELPLAN-98983, JIRA:RHELPLAN-131882, JIRA:RHELPLAN-137660, BZ#1640697, BZ#1697896, BZ#2047713 , JIRA:RHELPLAN-96940, JIRA:RHELPLAN-117234, JIRA:RHELPLAN-119001, JIRA:RHELPLAN-119852, BZ#2077767, BZ#2053598, BZ#2082303 , JIRA:RHELPLAN-121049, JIRA:RHELPLAN-109613, JIRA:RHELPLAN-135600, BZ#2149172 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/list_of_tickets_by_component |
Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] | Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" verb string Verb is one of: get, list, watch, create, update, delete 4.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/resourceaccessreviews POST : create a ResourceAccessReview 4.2.1. /apis/authorization.openshift.io/v1/resourceaccessreviews Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a ResourceAccessReview Table 4.2. Body parameters Parameter Type Description body ResourceAccessReview schema Table 4.3. HTTP responses HTTP code Response body 200 - OK ResourceAccessReview schema 201 - Created ResourceAccessReview schema 202 - Accepted ResourceAccessReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authorization_apis/resourceaccessreview-authorization-openshift-io-v1 |
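As a practical illustration of the endpoint described above, the oc client can either wrap the question in oc adm policy who-can or POST a review object directly. The namespace and resource below are placeholders, and the direct request is a rough, hedged equivalent rather than the documented procedure.

```
# Ask which users and groups can perform an action (wraps a resource access review).
oc adm policy who-can get pods -n my-project

# Roughly equivalent direct request against the endpoint documented above; the
# response lists the authorized users and groups. All required fields are included.
cat <<'EOF' | oc create -f - -o yaml
apiVersion: authorization.openshift.io/v1
kind: ResourceAccessReview
namespace: my-project
verb: get
resourceAPIGroup: ""
resourceAPIVersion: ""
resource: pods
resourceName: ""
path: ""
isNonResourceURL: false
EOF
```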
Appendix A. ApiBuilder Methods | Appendix A. ApiBuilder Methods The following table outlines the key methods available to the ApiBuilder class used in V3 of the Java software development kit. Table A.1. ApiBuilder Methods Method Argument Type Description user String The name of the user with which to connect to the Manager. You must specify both the user name and domain, such as admin@internal . This method must be used together with the password method. password String The password of the user with which to connect to the Manager. sessionID String The identifier of a session with which to connect to the Manager. If you have already authenticated with the Manager and a session is available, you can specify this argument instead of specifying a user name and password. requestTimeout Integer The timeout, in seconds, to wait for responses to requests. If a request takes longer than this value to respond, the request is cancelled, and an exception is thrown. This argument is optional. sessionTimeout Integer The timeout, in minutes, after which an active session is destroyed if no requests are made to the Manager. This argument is optional. persistentAuth Boolean Enables or disables persistent authentication using cookies. This option is enabled by default, so this method is only required to disable this option. noHostVerification Boolean Enables or disables verification of the host name in the SSL certificate presented by the server where the Manager is hosted. By default, the identity of host names is verified, and the connection is rejected if the host name is not correct, so this method is only required to disable this option. keyStorePath String Specifies the location of a file containing the CA certificate used to verify the certificate presented by the server where the Manager is hosted. This method must be used together with the keyStorePassword method. keyStorePassword String The password used to access the keystore file specified in the keyStorePath method. filter Boolean Enables or disables filtering of objects based on the permissions of the user making the request. By default, this option is disabled, which allows any user to see all objects in the environment. This method is only required to restrict the objects in the environment to those visible to the user making the request. debug Boolean Enables or disables debug output. By default, this option is disabled. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/java_sdk_guide/apibuilder_methods |
7.202. sssd | 7.202. sssd 7.202.1. RHBA-2015:1448 - sssd bug fix and enhancement update Updated sssd packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The System Security Services Daemon (SSSD) provides a set of daemons to manage access to remote directories and authentication mechanisms. Note The sssd packages have been upgraded to upstream version 1.12.4, which provides a number of bug fixes and enhancements over the version. ( BZ#1168347 ) Bug Fixes BZ# 1168363 The "domains=" option for the pam_sss module BZ# 1088402 The UPN (User Principal Name) attribute to identify users and user logins BZ# 1036745 Password expiration warnings for non-password authentication BZ# 1168344 The ID views feature BZ# 1168377 Transferring the user shell attribute from an Active Directory (AD) server to an Identity Management (IdM) client BZ# 1098147 Updating cached entries out-of-band in the background BZ# 1161564 The ad_site option can be used to override the AD site discovered from DNS BZ# 1168357 A new Kerberos plug-in maps Kerberos principals to local SSSD user names BZ# 1168378 Groups for AD trusted users are displayed without logging in BZ# 1171782 The case_sensitive option accepts the "preserve" value. BZ# 1173198 The ldap_access_order option accepts the "ppolicy" value. BZ# 1187642 SSSD can use GPOs on an AD server BZ# 1123291 Applications leveraging identities from SSSD could terminate unexpectedly while invalidating the memory cache using the sss_cache utility. This bug has been fixed, and using sss_cache is safe. BZ# 1134942 SSSD properly recognizes Windows 2012R2 as an AD server and applies the correct AD-specific performance optimizations. BZ# 1139878 SSSD failed to connect to servers that only allowed authenticated connections to read the rootDSE entry, such as IBM Tivoli LDAP servers. SSSD now retries an authenticated connection after a non-authenticated connection fails while reading rootDSE. As a result, SSSD works as expected with these servers. BZ# 1170910 When the simple_allow_groups and simple_allow_users options contained non-existent and existing entries, SSSD denied access to the existing users or groups. Now, SSSD logs and skips the non-existent entries and correctly handles the existing ones. BZ# 1173738 , BZ# 1194367 This update fixes bugs that caused SSSD to terminate unexpectedly due to memory errors or when trying to access callback data. BZ# 1135838 , BZ# 1172865 The sssd-ldap(5) and sssd.conf(5) man pages have been modified. BZ# 1201847 SSSD downloaded an unnecessary amount of data when obtaining information about groups from an AD provider when using POSIX attributes on the server. With this update, SSSD downloads only the information about the group object, not the contents of the group. BZ# 1205382 SSSD did not properly handle the "objectGUID" AD LDAP attribute. Now, SSSD considers "objectGUID" a binary value as expected, and the attribute is stored correctly. BZ# 1215765 If a multi-process program requested the initgroups data immediately after SSSD startup, before the SSSD cache was ready, the NSS responder could incorrectly return an empty group list. With this update, the initgroups requests from a multi-process program with an empty cache work correctly, and the described problem no longer occurs. BZ# 1221358 Setups with "subdomains_provider=none" set for AD domains did not sometimes work as expected. Now, the ldap_idmap_default_domain_sid option value is used for the SSSD main domain, thus fixing the bug. 
Note that ldap_idmap_default_domain_sid must be set for SSSD to function correctly in this situation. Enhancement BZ# 1171378 SRV queries now honor the time to live (TTL) values from DNS. Users of sssd are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-sssd |
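For orientation only, most of the options named in these notes are set in the [domain/NAME] section of /etc/sssd/sssd.conf. The snippet below is an illustrative sketch rather than part of any erratum: the domain name and all values are assumptions, and only the option names are taken from the notes above.

[domain/ad.example.com]                       # hypothetical domain name
id_provider = ad
ad_site = ExampleSite                         # override the AD site discovered from DNS (BZ#1161564)
case_sensitive = preserve                     # the "preserve" value accepted as of this update (BZ#1171782)
subdomains_provider = none
ldap_idmap_default_domain_sid = S-1-5-21-...  # must be set when subdomains_provider is none (BZ#1221358)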
Deploying OpenShift Data Foundation on VMware vSphere | Deploying OpenShift Data Foundation on VMware vSphere Red Hat OpenShift Data Foundation 4.13 Instructions on deploying OpenShift Data Foundation using VMware vSphere infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on VMware vSphere clusters. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_on_vmware_vsphere/index |
Part I. Preparing the RHEL installation | Part I. Preparing the RHEL installation Before installing Red Hat Enterprise Linux (RHEL), ensure that your system meets the necessary hardware and architecture requirements. Additionally, you may want to optimize your installation experience by customizing the installation media or creating a bootable medium tailored to your environment. Registration of your RHEL system to Red Hat provides access to updates and support, which can enhance the system's stability and security. Special attention may also be needed for systems using UEFI Secure Boot, particularly when installing or booting RHEL beta releases. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/preparing-the-rhel-installation |
Chapter 5. TemplateInstance [template.openshift.io/v1] | Chapter 5. TemplateInstance [template.openshift.io/v1] Description TemplateInstance requests and records the instantiation of a Template. TemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object TemplateInstanceSpec describes the desired state of a TemplateInstance. status object TemplateInstanceStatus describes the current state of a TemplateInstance. 5.1.1. .spec Description TemplateInstanceSpec describes the desired state of a TemplateInstance. Type object Required template Property Type Description requester object TemplateInstanceRequester holds the identity of an agent requesting a template instantiation. secret LocalObjectReference secret is a reference to a Secret object containing the necessary template parameters. template object Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 5.1.2. .spec.requester Description TemplateInstanceRequester holds the identity of an agent requesting a template instantiation. Type object Property Type Description extra object extra holds additional information provided by the authenticator. extra{} array (string) groups array (string) groups represent the groups this user is a part of. uid string uid is a unique value that identifies this user across time; if this user is deleted and another user by the same name is added, they will have different UIDs. username string username uniquely identifies this user among all active users. 5.1.3. .spec.requester.extra Description extra holds additional information provided by the authenticator. Type object 5.1.4. .spec.template Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required objects Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds labels object (string) labels is a optional set of labels that are applied to every object during the Template to Config transformation. 
message string message is an optional instructional message that will be displayed when this template is instantiated. This field should inform the user how to utilize the newly created resources. Parameter substitution will be performed on the message before being displayed so that generated credentials and other parameters can be included in the output. metadata ObjectMeta objects array (RawExtension) objects is an array of resources to include in this template. If a namespace value is hardcoded in the object, it will be removed during template instantiation, however if the namespace value is, or contains, a USD{PARAMETER_REFERENCE}, the resolved value after parameter substitution will be respected and the object will be created in that namespace. parameters array parameters is an optional array of Parameters used during the Template to Config transformation. parameters[] object Parameter defines a name/value variable that is to be processed during the Template to Config transformation. 5.1.5. .spec.template.parameters Description parameters is an optional array of Parameters used during the Template to Config transformation. Type array 5.1.6. .spec.template.parameters[] Description Parameter defines a name/value variable that is to be processed during the Template to Config transformation. Type object Required name Property Type Description description string Description of a parameter. Optional. displayName string Optional: The name that will show in UI instead of parameter 'Name' from string From is an input value for the generator. Optional. generate string generate specifies the generator to be used to generate random string from an input value specified by From field. The result string is stored into Value field. If empty, no generator is being used, leaving the result Value untouched. Optional. The only supported generator is "expression", which accepts a "from" value in the form of a simple regular expression containing the range expression "[a-zA-Z0-9]", and the length expression "a{length}". Examples: from | value ----------------------------- "test[0-9]{1}x" | "test7x" "[0-1]{8}" | "01001100" "0x[A-F0-9]{4}" | "0xB3AF" "[a-zA-Z0-9]{8}" | "hW4yQU5i" name string Name must be set and it can be referenced in Template Items using USD{PARAMETER_NAME}. Required. required boolean Optional: Indicates the parameter must have a value. Defaults to false. value string Value holds the Parameter data. If specified, the generator will be ignored. The value replaces all occurrences of the Parameter USD{Name} expression during the Template to Config transformation. Optional. 5.1.7. .status Description TemplateInstanceStatus describes the current state of a TemplateInstance. Type object Property Type Description conditions array conditions represent the latest available observations of a TemplateInstance's current state. conditions[] object TemplateInstanceCondition contains condition information for a TemplateInstance. objects array Objects references the objects created by the TemplateInstance. objects[] object TemplateInstanceObject references an object created by a TemplateInstance. 5.1.8. .status.conditions Description conditions represent the latest available observations of a TemplateInstance's current state. Type array 5.1.9. .status.conditions[] Description TemplateInstanceCondition contains condition information for a TemplateInstance. 
Type object Required type status lastTransitionTime reason message Property Type Description lastTransitionTime Time LastTransitionTime is the last time a condition status transitioned from one state to another. message string Message is a human readable description of the details of the last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False or Unknown. type string Type of the condition, currently Ready or InstantiateFailure. 5.1.10. .status.objects Description Objects references the objects created by the TemplateInstance. Type array 5.1.11. .status.objects[] Description TemplateInstanceObject references an object created by a TemplateInstance. Type object Property Type Description ref ObjectReference ref is a reference to the created object. When used under .spec, only name and namespace are used; these can contain references to parameters which will be substituted following the usual rules. 5.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/templateinstances GET : list or watch objects of kind TemplateInstance /apis/template.openshift.io/v1/watch/templateinstances GET : watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances DELETE : delete collection of TemplateInstance GET : list or watch objects of kind TemplateInstance POST : create a TemplateInstance /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances GET : watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name} DELETE : delete a TemplateInstance GET : read the specified TemplateInstance PATCH : partially update the specified TemplateInstance PUT : replace the specified TemplateInstance /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances/{name} GET : watch changes to an object of kind TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name}/status GET : read status of the specified TemplateInstance PATCH : partially update status of the specified TemplateInstance PUT : replace status of the specified TemplateInstance 5.2.1. /apis/template.openshift.io/v1/templateinstances Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind TemplateInstance Table 5.2. 
HTTP responses HTTP code Reponse body 200 - OK TemplateInstanceList schema 401 - Unauthorized Empty 5.2.2. /apis/template.openshift.io/v1/watch/templateinstances Table 5.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances Table 5.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of TemplateInstance Table 5.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.8. Body parameters Parameter Type Description body DeleteOptions schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind TemplateInstance Table 5.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK TemplateInstanceList schema 401 - Unauthorized Empty HTTP method POST Description create a TemplateInstance Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body TemplateInstance schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 202 - Accepted TemplateInstance schema 401 - Unauthorized Empty 5.2.4. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances Table 5.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 5.17. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name} Table 5.18. Global path parameters Parameter Type Description name string name of the TemplateInstance namespace string object name and auth scope, such as for teams and projects Table 5.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a TemplateInstance Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.21. Body parameters Parameter Type Description body DeleteOptions schema Table 5.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified TemplateInstance Table 5.23. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified TemplateInstance Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.25. Body parameters Parameter Type Description body Patch schema Table 5.26. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified TemplateInstance Table 5.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.28. Body parameters Parameter Type Description body TemplateInstance schema Table 5.29. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty 5.2.6. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances/{name} Table 5.30. 
Global path parameters Parameter Type Description name string name of the TemplateInstance namespace string object name and auth scope, such as for teams and projects Table 5.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.7. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name}/status Table 5.33. Global path parameters Parameter Type Description name string name of the TemplateInstance namespace string object name and auth scope, such as for teams and projects Table 5.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified TemplateInstance Table 5.35. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified TemplateInstance Table 5.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.37. Body parameters Parameter Type Description body Patch schema Table 5.38. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified TemplateInstance Table 5.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.40. Body parameters Parameter Type Description body TemplateInstance schema Table 5.41. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/template_apis/templateinstance-template-openshift-io-v1 |
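As a usage illustration for the list endpoint documented in section 5.2.3 above, the following sketch calls GET /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances with the JDK's built-in java.net.http.HttpClient. The API server address, namespace, and bearer token are placeholder assumptions, and the sketch relies on the default trust store, so a cluster signed by its own CA would additionally need that CA imported; the request path and the limit query parameter come from the endpoint documentation above.

// Illustrative sketch: list TemplateInstance objects in one namespace; all values are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListTemplateInstances {
    public static void main(String[] args) throws Exception {
        String apiServer = "https://api.cluster.example.com:6443";  // placeholder API server URL
        String namespace = "my-project";                            // placeholder namespace
        String token = System.getenv("OPENSHIFT_TOKEN");            // placeholder bearer token

        // GET /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances (see section 5.2.3)
        URI uri = URI.create(apiServer
                + "/apis/template.openshift.io/v1/namespaces/" + namespace
                + "/templateinstances?limit=50");                   // 'limit' query parameter as documented above

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // 200 returns a TemplateInstanceList; 401 means the request was not authorized (see Table 5.11).
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}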
11.4. Durations | 11.4. Durations Durations are used to calculate a value for end when one is not supplied to in_range operations. They contain the same fields as date_spec objects but without the limitations (i.e., you can have a duration of 19 months). Like date_specs, any field not supplied is ignored. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/_durations
Chapter 2. The Admin Tab | Chapter 2. The Admin Tab The Admin tab provides an interface where administrative users can view usage and manage instances, volumes, flavors, images, projects, users, services, and quotas. Note The Admin tab displays in the main window only if you have logged in as a user with administrative privileges. The following options are available in the Admin tab: Table 2.1. System Panel Parameter Name Description Overview View basic reports. Resource Usage Use the following tabs to view the following usages: Usage Report - View the usage report. Stats - View the statistics of all resources. Hypervisors View the hypervisor summary. Host Aggregates View, create, and edit host aggregates. View the list of availability zones. Instances View, pause, resume, suspend, migrate, soft or hard reboot, and delete running instances that belong to users of some, but not all, projects. Also, view the log for an instance or access an instance using the console. Volumes View, create, edit, and delete volumes, and volume types. Flavors View, create, edit, view extra specifications for, and delete flavors. Flavors are the virtual hardware templates in OpenStack. Images View, create, edit properties for, and delete custom images. Networks View, create, edit properties for, and delete networks. Routers View, create, edit properties for, and delete routers. Floating IPs View allocated floating IP addresses for all projects. Defaults View and edit the default quotas (maximum limits) for resources in the environment. Metadata Definitions Import, view, and edit metadata definition namespaces, and associate the metadata definitions with specific resource types. System Information Contains the following tabs: Services - View a list of the services. Compute Services - View a list of all Compute services. Network Agents - View the network agents. Block Storage Services - View a list of all Block Storage services. Orchestration Services - View a list of all Orchestration services. 2.1. View Allocated Floating IP Addresses You can use the Floating IPs panel to view a list of allocated floating IP addresses. This information can also be accessed using the command line: | [
"nova list --all-tenants"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/introduction_to_the_openstack_dashboard/the_admin_tab |
1.5. Common Exploits and Attacks | 1.5. Common Exploits and Attacks Table 1.1, "Common Exploits" details some of the most common exploits and entry points used by intruders to access organizational network resources. Key to these common exploits are the explanations of how they are performed and how administrators can properly safeguard their network against such attacks. Table 1.1. Common Exploits Exploit Description Notes Null or Default Passwords Leaving administrative passwords blank or using a default password set by the product vendor. This is most common in hardware such as routers and firewalls, but some services that run on Linux can contain default administrator passwords as well (though Red Hat Enterprise Linux 7 does not ship with them). Commonly associated with networking hardware such as routers, firewalls, VPNs, and network attached storage (NAS) appliances. Common in many legacy operating systems, especially those that bundle services (such as UNIX and Windows). Administrators sometimes create privileged user accounts in a rush and leave the password null, creating a perfect entry point for malicious users who discover the account. Default Shared Keys Secure services sometimes package default security keys for development or evaluation testing purposes. If these keys are left unchanged and are placed in a production environment on the Internet, all users with the same default keys have access to that shared-key resource, and any sensitive information that it contains. Most common in wireless access points and preconfigured secure server appliances. IP Spoofing A remote machine acts as a node on your local network, finds vulnerabilities in your servers, and installs a backdoor program or Trojan horse to gain control over your network resources. Spoofing is quite difficult as it involves the attacker predicting TCP/IP sequence numbers to coordinate a connection to target systems, but several tools are available to assist crackers in exploiting such a vulnerability. Depends on the target system running services (such as rsh, telnet, FTP, and others) that use source-based authentication techniques, which are not recommended when compared to PKI or other forms of encrypted authentication used in ssh or SSL/TLS. Eavesdropping Collecting data that passes between two active nodes on a network by eavesdropping on the connection between the two nodes. This type of attack works mostly with plain text transmission protocols such as Telnet, FTP, and HTTP transfers. Remote attacker must have access to a compromised system on a LAN in order to perform such an attack; usually the cracker has used an active attack (such as IP spoofing or man-in-the-middle) to compromise a system on the LAN. Preventative measures include services with cryptographic key exchange, one-time passwords, or encrypted authentication to prevent password snooping; strong encryption during transmission is also advised. Service Vulnerabilities An attacker finds a flaw or loophole in a service run over the Internet; through this vulnerability, the attacker compromises the entire system and any data that it may hold, and could possibly compromise other systems on the network. HTTP-based services such as CGI are vulnerable to remote command execution and even interactive shell access.
Even if the HTTP service runs as a non-privileged user such as "nobody", information such as configuration files and network maps can be read, or the attacker can start a denial of service attack that drains system resources or renders the system unavailable to other users. Services sometimes can have vulnerabilities that go unnoticed during development and testing; these vulnerabilities (such as buffer overflows , where attackers overflow an application's memory buffer with carefully crafted values, potentially gaining an interactive command prompt from which they may execute arbitrary commands) can give complete administrative control to an attacker. Administrators should make sure that services do not run as the root user, and should stay vigilant of patches and errata updates for applications from vendors or security organizations such as CERT and CVE. Application Vulnerabilities Attackers find faults in desktop and workstation applications (such as email clients) and execute arbitrary code, implant Trojan horses for future compromise, or crash systems. Further exploitation can occur if the compromised workstation has administrative privileges on the rest of the network. Workstations and desktops are more prone to exploitation as workers do not have the expertise or experience to prevent or detect a compromise; it is imperative to inform individuals of the risks they are taking when they install unauthorized software or open unsolicited email attachments. Safeguards can be implemented such that email client software does not automatically open or execute attachments. Additionally, the automatic update of workstation software using Red Hat Network or other system management services can alleviate the burdens of multi-seat security deployments. Denial of Service (DoS) Attacks Attacker or group of attackers coordinate against an organization's network or server resources by sending unauthorized packets to the target host (either server, router, or workstation). This forces the resource to become unavailable to legitimate users. The most reported DoS case in the US occurred in 2000. Several highly-trafficked commercial and government sites were rendered unavailable by a coordinated ping flood attack using several compromised systems with high bandwidth connections acting as zombies , or redirected broadcast nodes. Source packets are usually forged (as well as rebroadcast), making investigation as to the true source of the attack difficult. Advances in ingress filtering (IETF RFC 2267) using iptables and Network Intrusion Detection Systems such as snort assist administrators in tracking down and preventing distributed DoS attacks. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Common_Exploits_and_Attacks
A.4. Install and Configure HAProxy | A.4. Install and Configure HAProxy Perform the following procedure on your two HAProxy nodes: Install haproxy . Configure haproxy for SELinux and HTTP. Add the following lines: As root, assign the correct SELinux context and file permissions to the haproxy-http.xml file. If you intend to use HTTPS, configure haproxy for SELinux and HTTPS. Add the following lines: As root, assign the correct SELinux context and file permissions to the haproxy-https.xml file. If you intend to use HTTPS, generate keys for SSL. If you do not have a certificate, you may use a self-signed certificate. For information on generating keys and on self-signed certificates, see the Red Hat Enterprise Linux System Administrator's Guide . Finally, put the certificate and key into a PEM file. Configure HAProxy. The global and defaults sections of haproxy.cfg may remain unchanged. After the defaults sections, you will need to configure frontend and backend sections, as in the following example: Enable/start haproxy | [
"yum install haproxy",
"vim /etc/firewalld/services/haproxy-http.xml",
"<?xml version=\"1.0\" encoding=\"utf-8\"?> <service> <short>HAProxy-HTTP</short> <description>HAProxy load-balancer</description> <port protocol=\"tcp\" port=\"80\"/> </service>",
"cd /etc/firewalld/services restorecon haproxy-http.xml chmod 640 haproxy-http.xml",
"vim /etc/firewalld/services/haproxy-https.xml",
"<?xml version=\"1.0\" encoding=\"utf-8\"?> <service> <short>HAProxy-HTTPS</short> <description>HAProxy load-balancer</description> <port protocol=\"tcp\" port=\"443\"/> </service>",
"cd /etc/firewalld/services restorecon haproxy-https.xml chmod 640 haproxy-https.xml",
"cat example.com.crt example.com.key > example.com.pem cp example.com.pem /etc/ssl/private/",
"vim /etc/haproxy/haproxy.cfg",
"frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind <insert vip ipv4>:443 ssl crt /etc/ssl/private/example.com.pem default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:80 check server rgw2 10.0.0.80:80 check",
"systemctl enable haproxy systemctl start haproxy"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/install_haproxy_example1 |
Pipelines | Pipelines OpenShift Container Platform 4.18 Contains information about Pipelines for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/pipelines/index |
Chapter 5. Monitoring your application | Chapter 5. Monitoring your application This section contains information about monitoring your Eclipse Vert.x-based application running on OpenShift. 5.1. Accessing JVM metrics for your application on OpenShift 5.1.1. Accessing JVM metrics using Jolokia on OpenShift Jolokia is a built-in lightweight solution for accessing JMX (Java Management Extensions) metrics over HTTP on OpenShift. Jolokia allows you to access CPU, storage, and memory usage data collected by JMX over an HTTP bridge. Jolokia uses a REST interface and JSON-formatted message payloads. It is suitable for monitoring cloud applications thanks to its comparatively high speed and low resource requirements. For Java-based applications, the OpenShift Web console provides the integrated hawt.io console that collects and displays all relevant metrics output by the JVM running your application. Prerequisites the oc client authenticated a Java-based application container running in a project on OpenShift latest JDK 1.8.0 image Procedure List the deployment configurations of the pods inside your project and select the one that corresponds to your application. oc get dc Open the YAML deployment template of the pod running your application for editing. oc edit dc/MY_APP_NAME Add the following entry to the ports section of the template and save your changes: ... spec: ... ports: - containerPort: 8778 name: jolokia protocol: TCP ... ... Redeploy the pod running your application. oc rollout latest dc/MY_APP_NAME The pod is redeployed with the updated deployment configuration and exposes port 8778 . Log into the OpenShift Web console. In the sidebar, navigate to Applications > Pods , and click on the name of the pod running your application. In the pod details screen, click Open Java Console to access the hawt.io console. Additional resources hawt.io documentation 5.2. Exposing application metrics using Prometheus with Eclipse Vert.x Prometheus connects to a monitored application to collect data; the application does not send metrics to a server. Prerequisites Prometheus server running on your cluster Procedure Include the vertx-micrometer and vertx-web dependencies in the pom.xml file of your application: pom.xml <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-micrometer-metrics</artifactId> </dependency> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-web</artifactId> </dependency> Starting with version 3.5.4, exposing metrics for Prometheus requires that you configure the Eclipse Vert.x options in a custom Launcher class. 
In your custom Launcher class, override the beforeStartingVertx and afterStartingVertx methods to configure the metrics engine, for example: Example CustomLauncher.java file package org.acme; import io.micrometer.core.instrument.Meter; import io.micrometer.core.instrument.config.MeterFilter; import io.micrometer.core.instrument.distribution.DistributionStatisticConfig; import io.micrometer.prometheus.PrometheusMeterRegistry; import io.vertx.core.Launcher; import io.vertx.core.Vertx; import io.vertx.core.VertxOptions; import io.vertx.core.http.HttpServerOptions; import io.vertx.micrometer.MicrometerMetricsOptions; import io.vertx.micrometer.VertxPrometheusOptions; import io.vertx.micrometer.backends.BackendRegistries; public class CustomLauncher extends Launcher { @Override public void beforeStartingVertx(VertxOptions options) { options.setMetricsOptions(new MicrometerMetricsOptions() .setPrometheusOptions(new VertxPrometheusOptions().setEnabled(true) .setStartEmbeddedServer(true) .setEmbeddedServerOptions(new HttpServerOptions().setPort(8081)) .setEmbeddedServerEndpoint("/metrics")) .setEnabled(true)); } @Override public void afterStartingVertx(Vertx vertx) { PrometheusMeterRegistry registry = (PrometheusMeterRegistry) BackendRegistries.getDefaultNow(); registry.config().meterFilter( new MeterFilter() { @Override public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) { return DistributionStatisticConfig.builder() .percentilesHistogram(true) .build() .merge(config); } }); } } Create a custom Verticle class and override the start method to collect metrics. For example, measure the execution time using the Timer class: Example CustomVertxApp.java file package org.acme; import io.micrometer.core.instrument.MeterRegistry; import io.micrometer.core.instrument.Timer; import io.vertx.core.AbstractVerticle; import io.vertx.core.Vertx; import io.vertx.core.VertxOptions; import io.vertx.core.http.HttpServerOptions; import io.vertx.micrometer.backends.BackendRegistries; public class CustomVertxApp extends AbstractVerticle { @Override public void start() { MeterRegistry registry = BackendRegistries.getDefaultNow(); Timer timer = Timer .builder("my.timer") .description("a description of what this timer does") .register(registry); vertx.setPeriodic(1000, l -> { timer.record(() -> { // Do something }); }); } } Set the <vertx.verticle> and <vertx.launcher> properties in the pom.xml file of your application to point to your custom classes: <properties> ... <vertx.verticle>org.acme.CustomVertxApp</vertx.verticle> <vertx.launcher>org.acme.CustomLauncher</vertx.launcher> ... </properties> Launch your application: $ mvn vertx:run Invoke the traced endpoint several times: $ curl http://localhost:8080/ Hello Wait at least 15 seconds for collection to occur, and see the metrics in the Prometheus UI: Open the Prometheus UI at http://localhost:9090/ and type hello into the Expression box. From the suggestions, select, for example, application:hello_count and click Execute . In the table that is displayed, you can see how many times the resource method was invoked. Alternatively, select application:hello_time_mean_seconds to see the mean time of all the invocations. Note that all metrics you created are prefixed with application: . There are other metrics, automatically exposed by Eclipse Vert.x as the Eclipse MicroProfile Metrics specification requires. Those metrics are prefixed with base: and vendor: and expose information about the JVM in which the application runs. 
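As a quick sanity check, you can query the embedded metrics server directly once the application is running. This is an illustrative step based on the configuration above; it assumes the embedded server listens on port 8081 and serves the /metrics endpoint, as set in CustomLauncher : $ curl http://localhost:8081/metrics The response uses the Prometheus text exposition format, and the custom my.timer metric appears among the Vert.x metrics after the periodic task has executed at least once. 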
Additional resources For additional information about using Micrometer metrics with Eclipse Vert.x, see Eclipse Vert.x Micrometer Metrics . | [
"get dc",
"NAME REVISION DESIRED CURRENT TRIGGERED BY MY_APP_NAME 2 1 1 config,image(my-app:6)",
"edit dc/MY_APP_NAME",
"spec: ports: - containerPort: 8778 name: jolokia protocol: TCP",
"rollout latest dc/MY_APP_NAME",
"<dependency> <groupId>io.vertx</groupId> <artifactId>vertx-micrometer-metrics</artifactId> </dependency> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-web</artifactId> </dependency>",
"package org.acme; import io.micrometer.core.instrument.Meter; import io.micrometer.core.instrument.config.MeterFilter; import io.micrometer.core.instrument.distribution.DistributionStatisticConfig; import io.micrometer.prometheus.PrometheusMeterRegistry; import io.vertx.core.Vertx; import io.vertx.core.VertxOptions; import io.vertx.core.http.HttpServerOptions; import io.vertx.micrometer.MicrometerMetricsOptions; import io.vertx.micrometer.VertxPrometheusOptions; import io.vertx.micrometer.backends.BackendRegistries; public class CustomLauncher extends Launcher { @Override public void beforeStartingVertx(VertxOptions options) { options.setMetricsOptions(new MicrometerMetricsOptions() .setPrometheusOptions(new VertxPrometheusOptions().setEnabled(true) .setStartEmbeddedServer(true) .setEmbeddedServerOptions(new HttpServerOptions().setPort(8081)) .setEmbeddedServerEndpoint(\"/metrics\")) .setEnabled(true)); } @Override public void afterStartingVertx(Vertx vertx) { PrometheusMeterRegistry registry = (PrometheusMeterRegistry) BackendRegistries.getDefaultNow(); registry.config().meterFilter( new MeterFilter() { @Override public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) { return DistributionStatisticConfig.builder() .percentilesHistogram(true) .build() .merge(config); } }); }",
"package org.acme; import io.micrometer.core.instrument.MeterRegistry; import io.micrometer.core.instrument.Timer; import io.vertx.core.AbstractVerticle; import io.vertx.core.Vertx; import io.vertx.core.VertxOptions; import io.vertx.core.http.HttpServerOptions; import io.vertx.micrometer.backends.BackendRegistries; public class CustomVertxApp extends AbstractVerticle { @Override public void start() { MeterRegistry registry = BackendRegistries.getDefaultNow(); Timer timer = Timer .builder(\"my.timer\") .description(\"a description of what this timer does\") .register(registry); vertx.setPeriodic(1000, l -> { timer.record(() -> { // Do something }); }); } }",
"<properties> <vertx.verticle>org.acme.CustomVertxApp</vertx.verticle> <vertx.launcher>org.acme.CustomLauncher</vertx.launcher> </properties>",
"mvn vertx:run",
"curl http://localhost:8080/ Hello"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_runtime_guide/monitoring-vertx-based-application-on-openshift_vertx |
Appendix B. Revision history | Appendix B. Revision history 0.4-0 Thu Jan 30 2025, Gabriela Fialova ( [email protected] ) Added a known issue JIRA:RHELDOCS-19603 (IdM SSSD) 0.3-9 Thu Dec 12 2024, Brian Angelica ( [email protected] ) Updated Tech Preview in BZ#1957316 . 0.3-8 Wed Dec 11 2024, Brian Angelica ( [email protected] ) Updated Overview with new link (Conversion from a different Linux distribution to RHEL). 0.3-7 Wed Dec 4 2024, Gabriela Fialova ( [email protected] ) Updated the Customer Portal labs section Updated the Installation section 0.3-6 Thu May 23 2024, Brian Angelica ( [email protected] ) Updated Enhancements in JIRA:RHELDOCS-18188 (Networking). 0.3-5 Thu May 9 2024, Brian Angelica ( [email protected] ) Updated Tech Preview in BZ#1690207 . 0.3-4 Thu May 9 2024, Gabriela Fialova ( [email protected] ) Updated a known issue BZ#1730502 (Storage). 0.3-3 Thu Feb 29 2024, Lucie Varakova ( [email protected] ) Added a deprecated functionality JIRA:RHELDOCS-17641 (Networking). 0.3-2 Fri Nov 10 2023, Gabriela Fialova ( [email protected] ) Updated the module on Providing Feedback on RHEL Documentation. 0.3-1 Tue Nov 7 2023, Gabriela Fialova ( [email protected] ) Fix broken links. 0.3-0 Fri Oct 13 2023, Gabriela Fialova ( [email protected] ) Added a Tech Preview JIRA:RHELDOCS-16861 (Containers). 0.2-9 September 8 2023, Marc Muehlfeld ( [email protected] ) Added a deprecated functionality release note JIRA:RHELDOCS-16612 (Samba). 0.2-8 Fri Aug 11 2023, Lucie Varakova ( [email protected] ) Added a known issue BZ#2227218 (Installer and image creation). 0.2-7 Thu Apr 27 2023, Gabriela Fialova ( [email protected] ) Added a known issue JIRA:RHELPLAN-155168 (Identity Management). 0.2-6 Thu Apr 13 2023, Gabriela Fialova ( [email protected] ) Fixed 2 broken links in DFs and KIs. 0.2-5 Thu Dec 08, 2022, Marc Muehlfeld ( [email protected] ) Added a known issue BZ#2132754 (Networking). 0.2-4 Thu Jun 09, Lucie Varakova ( [email protected] ) Added a new feature BZ#1996076 (Identity Management). 0.2-3 Fri Apr 29, Lenka Spackova ( [email protected] ) Updated Deprecated functionality introduction. Fixed typo in BZ#1605216 . Fixed broken links. 0.2-2 Thu Mar 24 2022, Jaroslav Klech ( [email protected] ) Added a bug fix BZ#1947839 (Kernel). 0.2-1 Mon Mar 21 2022, Jaroslav Klech ( [email protected] ) Removed a known issue (Kernel). 0.2-0 Fri Feb 04 2022, Jaroslav Klech ( [email protected] ) Added a deprecated functionality BZ#1871863 (Hardware enablement). Updated Deprecated packages . Added deprecated functionality BZ#1794513 (Filesystems and storage). 0.1-9 Thu Jan 20 2022, Lucie Manaskova ( [email protected] ) Added a known issue BZ#2028361 (Installer and image creation). 0.1-8 Thu Dec 23 2021, Lenka Spackova ( [email protected] ) Added information about the Soft-RoCE driver, rdma_rxe , to Technology Previews BZ#1605216 and Deprecated Functionality BZ#1878207 (Kernel). 0.1-7 Wed Dec 22 29 2021, Lenka Spackova ( [email protected] ) Added an enhancement BZ#2005431 (Security). Updated Deprecated packages . 0.1-6 Thu Oct 29 2021, Jaroslav Klech ( [email protected] ) Updated the fw_devlink parameter (Important changes to external kernel parameters). 0.1-5 Thu Oct 07 2021, Lenka Spackova ( [email protected] ) Updated the known issue BZ#1942330 (Dynamic programming languages, web and database servers). 0.1-4 Tue Oct 05 2021, Lucie Manaskova ( [email protected] ) Added deprecated functionality BZ#1999620 (Shells and command-line tools). 
0.1-3 Fri Sep 17 2021, Lucie Manaskova ( [email protected] ) Added known issue BZ#1987087 (Installer). 0.1-2 Tue Sep 07 2021, Lucie Manaskova ( [email protected] ) Updated the known issue BZ#1961722 (Virtualization). 0.1-1 Fri Sep 03 2021, Lenka Spackova ( [email protected] ) Updated the known issue BZ#1995558 (Virtualization). 0.1-0 Mon Aug 30 2021, Lenka Spackova ( [email protected] ) Added a known issue BZ#1995558 (Virtualization). Added a bug fix BZ#1940854 (Containers). 0.0-9 Fri Aug 20 2021, Lucie Manaskova ( [email protected] ) Added the Package management with YUM/DNF to the Distribution chapter. Updated the text of BZ#1708456 (Kernel). Added new feature BZ#1888214 (File systems and storage). Added a known issue BZ#1991659 (Compilers and development tools). Added a Technology Preview feature JIRA:RHELPLAN-58596 (Identity Management). 0.0-8 Tue Aug 10 2021, Lucie Manaskova ( [email protected] ) Updated new feature BZ#1905398 (RHEL in cloud environments). 0.0-7 Tue Aug 03 2021, Lucie Manaskova ( [email protected] ) Added known issue BZ#1935722 (Installer and image creation). Added known issue BZ#1961722 (Virtualization). 0.0-6 Fri Jul 23 2021, Lucie Manaskova ( [email protected] ) Added known issue BZ#1924230 (Security). Added known issue BZ#1957768 (Identity Management). 0.0-5 Fri Jul 16 2021, Lucie Manaskova ( [email protected] ) Added known issue BZ#1959020 (Virtualization). Added known issue BZ#1963981 (RHEL in cloud environments). Added new feature BZ#1340463 (Identity Management). Removed invalid release note and its revision history entry. 0.0-4 Wed Jun 23 2021, Lucie Manaskova ( [email protected] ) Added new feature BZ#1966838 (Supportability). Updated Deprecated devices with sfc . Other small improvements. 0.0-3 Wed Jun 16 2021, Lucie Manaskova ( [email protected] ) Added deprecated functionality BZ#1929173 (Networking). Added deprecated functionality BZ#1920624 (Compilers and development tools). Added new feature JIRA:RHELPLAN-63081 (Identity Management). Added known issue BZ#1949743 (File systems and storage). Added known issue BZ#1332758 (Virtualization). Added known issue BZ#1957532 (RHEL in cloud environments). Other small improvements. 0.0-2 Fri Jun 04 2021, Lenka Spackova ( [email protected] ) Fixed the BZ#1849815 note. Various formatting improvements. 0.0-1 Wed May 18 2021, Lucie Manaskova ( [email protected] ) Release of the Red Hat Enterprise Linux 8.4 Release Notes. 0.0-0 Wed Mar 31 2021, Lucie Manaskova ( [email protected] ) Release of the Red Hat Enterprise Linux 8.4 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/revision_history
Chapter 1. System requirements and supported architectures | Chapter 1. System requirements and supported architectures Red Hat Enterprise Linux 8 delivers a stable, secure, consistent foundation across hybrid cloud deployments with the tools needed to deliver workloads faster with less effort. You can deploy RHEL as a guest on supported hypervisors and Cloud provider environments as well as on physical infrastructure, so your applications can take advantage of innovations in the leading hardware architecture platforms. Review the guidelines provided for system, hardware, security, memory, and RAID before installing. If you want to use your system as a virtualization host, review the necessary hardware requirements for virtualization . Red Hat Enterprise Linux supports the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z architectures 1.1. Supported installation targets An installation target is a storage device that stores Red Hat Enterprise Linux and boots the system. Red Hat Enterprise Linux supports the following installation targets for IBM Z , IBM Power, AMD64, Intel 64, and 64-bit ARM systems: Storage connected by a standard internal interface, such as DASD, SCSI, SATA, or SAS BIOS/firmware RAID devices on the Intel 64, AMD64 and arm64 architectures NVDIMM devices in sector mode on the Intel 64 and AMD64 architectures, supported by the nd_pmem driver. Storage connected via Fibre Channel Host Bus Adapters, such as DASDs (IBM Z architecture only) and SCSI LUNs, including multipath devices. Some might require vendor-provided drivers. Xen block devices on Intel processors in Xen virtual machines. VirtIO block devices on Intel processors in KVM virtual machines. Red Hat does not support installation to USB drives or SD memory cards. For information about support for third-party virtualization technologies, see the Red Hat Hardware Compatibility List . 1.2. System specifications The Red Hat Enterprise Linux installation program automatically detects and installs your system's hardware, so you should not have to supply any specific system information. However, for certain Red Hat Enterprise Linux installation scenarios, it is recommended that you record system specifications for future reference. These scenarios include: Installing RHEL with a customized partition layout Record: The model numbers, sizes, types, and interfaces of the disks attached to the system. For example, Seagate ST3320613AS 320 GB on SATA0, Western Digital WD7500AAKS 750 GB on SATA1. Installing RHEL as an additional operating system on an existing system Record: Partitions used on the system. This information can include file system types, device node names, file system labels, and sizes, and allows you to identify specific partitions during the partitioning process. If one of the operating systems is a Unix operating system, Red Hat Enterprise Linux may report the device names differently. Additional information can be found by executing the equivalent of the mount command and the blkid command, and in the /etc/fstab file. If multiple operating systems are installed, the Red Hat Enterprise Linux installation program attempts to automatically detect them, and to configure the boot loader to boot them. You can manually configure additional operating systems if they are not detected automatically. Installing RHEL from an image on a local disk Record: The disk and directory that holds the image. 
Installing RHEL from a network location If the network has to be configured manually (that is, DHCP is not used), record: IP address Netmask Gateway IP address Server IP addresses, if required Contact your network administrator if you need assistance with networking requirements. Installing RHEL on an iSCSI target Record: The location of the iSCSI target. Depending on your network, you may need a CHAP user name and password, and a reverse CHAP user name and password. Installing RHEL if the system is part of a domain Verify that the domain name is supplied by the DHCP server. If it is not, enter the domain name during installation. 1.3. Disk and memory requirements If several operating systems are installed, it is important that you verify that the allocated disk space is separate from the disk space required by Red Hat Enterprise Linux. In some cases, it is important to dedicate specific partitions to Red Hat Enterprise Linux, for example, for AMD64, Intel 64, and 64-bit ARM, at least two partitions ( / and swap ) must be dedicated to RHEL and for IBM Power Systems servers, at least three partitions ( / , swap , and a PReP boot partition) must be dedicated to RHEL. Additionally, you must have a minimum of 10 GiB of available disk space. To install Red Hat Enterprise Linux, you must have a minimum of 10 GiB of space in either unpartitioned disk space or in partitions that can be deleted. For more information, see Partitioning reference . Table 1.1. Minimum RAM requirements Installation type Minimum RAM Local media installation (USB, DVD) 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture NFS network installation 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture HTTP, HTTPS or FTP network installation 3 GiB for IBM Z and x86_64 architectures 4 GiB for aarch64 and ppc64le architectures It is possible to complete the installation with less memory than the minimum requirements. The exact requirements depend on your environment and installation path. Test various configurations to determine the minimum required RAM for your environment. Installing Red Hat Enterprise Linux using a Kickstart file has the same minimum RAM requirements as a standard installation. However, additional RAM may be required if your Kickstart file includes commands that require additional memory or write data to the RAM disk. For more information, see Automatically installing RHEL . 1.4. Graphics display resolution requirements Your system must have the following minimum resolution to ensure a smooth and error-free installation of Red Hat Enterprise Linux. Table 1.2. Display resolution Product version Resolution Red Hat Enterprise Linux 8 Minimum : 800 x 600 Recommended : 1024 x 768 1.5. UEFI Secure Boot and Beta release requirements If you plan to install a Beta release of Red Hat Enterprise Linux on systems with UEFI Secure Boot enabled, first disable the UEFI Secure Boot option and then begin the installation. UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key, which the system's firmware verifies using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific public key, which the system fails to recognize by default. As a result, the system fails to even boot the installation media. 
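Before starting a Beta installation, you can verify the current Secure Boot state from an existing Linux system. The following check is illustrative and assumes the mokutil package is installed: $ mokutil --sb-state The command reports SecureBoot enabled or SecureBoot disabled . If Secure Boot is enabled, disable it in the firmware setup utility before booting the Beta installation media. 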
Additional resources For information about installing RHEL on IBM hardware, see IBM installation documentation Security hardening Composing a customized RHEL system image Red Hat ecosystem catalog RHEL technology capabilities and limits | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/system-requirements-and-supported-architectures_rhel-installer
Chapter 6. Fixed Common Vulnerabilities and Exposures | Chapter 6. Fixed Common Vulnerabilities and Exposures This section details Common Vulnerabilities and Exposures (CVEs) fixed in the AMQ Broker 7.9 release. ENTMQBR-4071 - CVE-2020-13956 httpclient: apache-httpclient: incorrect handling of malformed authority component in request URIs ENTMQBR-4677 - CVE-2021-21290 netty: Information disclosure via the local system temporary directory ENTMQBR-4775 - CVE-2020-27223 jetty: request containing multiple Accept headers with a large number of "quality" parameters may lead to DoS ENTMQBR-4779 - CVE-2021-3425 broker: Red Hat AMQ Broker: discloses JDBC username and password in the application log file ENTMQBR-4795 - CVE-2021-21295 netty: possible request smuggling in HTTP/2 due to missing validation ENTMQBR-4829 - CVE-2021-21409 netty: Request smuggling via content-length header ENTMQBR-4907 - CVE-2021-28163 jetty-server: jetty: Symlink directory exposes webapp directory contents ENTMQBR-4911 - CVE-2021-28165 jetty-server: jetty: Resource exhaustion when receiving an invalid large TLS frame ENTMQBR-4912 - CVE-2021-28164 jetty-server: jetty: Ambiguous paths can access WEB-INF ENTMQBR-4960 - CVE-2021-29425 commons-io: apache-commons-io: Limited path traversal in Apache Commons IO 2.2 to 2.6 ENTMQBR-5118 - CVE-2021-28169 jetty-server: jetty: requests to the ConcatServlet and WelcomeFilter are able to access protected resources within the WEB-INF directory ENTMQBR-5165 - CVE-2021-34428 jetty-server: jetty: SessionListener can prevent a session from being invalidated breaking logout ENTMQBR-5229 - CVE-2021-20289 resteasy-jaxrs: resteasy: Error message exposes endpoint class information ENTMQBR-5250 - CVE-2021-34429 jetty-server: jetty: crafted URIs allow bypassing security constraints ENTMQBR-5398 - CVE-2021-3763 AMQ Broker 7: Incorrect privilege in Management Console | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_red_hat_amq_broker_7.9/resolved_cves
5.3.3. Creating Volume Groups in a Cluster | 5.3.3. Creating Volume Groups in a Cluster You create volume groups in a cluster environment with the vgcreate command, just as you create them on a single node. By default, volume groups created with CLVM on shared storage are visible to all computers that have access to the shared storage. It is possible, however, to create volume groups that are local, visible only to one node in the cluster, by using the -c n option of the vgcreate command. The following command, when executed in a cluster environment, creates a volume group that is local to the node from which the command was executed. The command creates a local volume group named vg1 that contains physical volumes /dev/sdd1 and /dev/sde1 . You can change whether an existing volume group is local or clustered with the -c option of the vgchange command, which is described in Section 5.3.8, "Changing the Parameters of a Volume Group" . You can check whether an existing volume group is a clustered volume group with the vgs command, which displays the c attribute if the volume group is clustered. The following command displays the attributes of the volume groups VolGroup00 and testvg1 . In this example, VolGroup00 is not clustered, while testvg1 is clustered, as indicated by the c attribute under the Attr heading. For more information on the vgs command, see Section 5.3.5, "Displaying Volume Groups" , Section 5.8, "Customized Reporting for LVM" , and the vgs man page. | [
"vgcreate -c n vg1 /dev/sdd1 /dev/sde1",
"vgs VG #PV #LV #SN Attr VSize VFree VolGroup00 1 2 0 wz--n- 19.88G 0 testvg1 1 1 0 wz--nc 46.00G 8.00M"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_create_cluster |
Chapter 10. Upgrading Red Hat Ansible Automation Platform Operator on OpenShift Container Platform | Chapter 10. Upgrading Red Hat Ansible Automation Platform Operator on OpenShift Container Platform The Ansible Automation Platform Operator simplifies the installation, upgrade, and deployment of new Red Hat Ansible Automation Platform instances in your OpenShift Container Platform environment. 10.1. Overview You can use this document for help with upgrading Ansible Automation Platform 2.4 to 2.5 on Red Hat OpenShift Container Platform. This document applies to upgrades of Ansible Automation Platform 2.5 to later versions of 2.5. The Ansible Automation Platform Operator manages deployments, upgrades, backups, and restores of automation controller and automation hub. It also handles deployments of AnsibleJob and JobTemplate resources from the Ansible Automation Platform Resource Operator. Each operator version has default automation controller and automation hub versions. When the operator is upgraded, it also upgrades the automation controller and automation hub deployments it manages, unless overridden in the spec. OpenShift deployments of Ansible Automation Platform use the built-in Operator Lifecycle Manager (OLM) functionality. For more information, see Operator Lifecycle Manager concepts and resources . OpenShift does this by using Subscription, CSV, InstallPlan, and OperatorGroup objects. Most users will not have to interact directly with these resources. They are created when the Ansible Automation Platform Operator is installed from OperatorHub and managed through the Subscriptions tab in the OpenShift console UI. For more information, refer to Accessing the web console . 10.2. Upgrade considerations If you are upgrading from version 2.4, continue to the Upgrading the Ansible Automation Platform Operator . If your OpenShift Container Platform version is not supported by the Red Hat Ansible Automation Platform version you are upgrading to, you must upgrade your OpenShift Container Platform cluster to a supported version first. Refer to the Red Hat Ansible Automation Platform Life Cycle to determine the OpenShift Container Platform version needed. For information about upgrading your cluster, refer to Updating clusters . 10.3. Prerequisites To upgrade to a newer version of Ansible Automation Platform Operator, you must: Ensure your system meets the system requirements detailed in the Operator topologies section of the Tested deployment models guide. Create AutomationControllerBackup and AutomationHubBackup objects. For help with this see Backup and recovery for operator environments Review the Release notes for the new Ansible Automation Platform version to which you are upgrading and any intermediate versions. Determine the type of upgrade you want to perform. See the Channel Upgrades section for more information. 10.4. Channel upgrades Upgrading to version 2.5 from Ansible Automation Platform 2.4 involves retrieving updates from a "channel". A channel refers to a location where you can access your update. It currently resides in the OpenShift console UI. 10.4.1. In-channel upgrades Most upgrades occur within a channel as follows: A new update becomes available in the marketplace, through the redhat-operators CatalogSource. The system automatically creates a new InstallPlan for your Ansible Automation Platform subscription. If set to Manual , the InstallPlan needs manual approval in the OpenShift UI. If set to Automatic , it upgrades as soon as the new version is available. 
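You can also inspect and approve a pending InstallPlan from the command line. The following commands are an illustrative sketch; the aap namespace and the install-abcde InstallPlan name are placeholder values for your own deployment: oc get installplans -n aap oc patch installplan install-abcde -n aap --type merge --patch '{"spec":{"approved":true}}' Patching approved to true has the same effect as clicking Approve in the OpenShift console UI. 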
Note Set a manual install strategy on your Ansible Automation Platform Operator subscription during installation or upgrade. You will be prompted to approve upgrades when available in your chosen update channel. Stable channels, like stable-2.5, are available for each X.Y release. A new subscription, CSV, and operator containers are created alongside the old ones. The old resources are cleaned up after a successful install. 10.4.2. Cross-channel upgrades Upgrading between X.Y channels is always manual and intentional. Stable channels for major and minor versions are in the Operator Catalog. Currently, only version 2.x is available, so there are few channels. It is recommended to stay on the latest minor version channel for the latest patches. If the subscription is set for manual upgrades, you must approve the upgrade in the UI. Then, the system upgrades the Operator to the latest version in that channel. Note It is recommended to set a manual install strategy on your Ansible Automation Platform Operator subscription during installation or upgrade. You will be prompted to approve upgrades when they become available in your chosen update channel. Stable channels, such as stable-2.5, are available for each X.Y release. The containers provided in the latest channel are updated regularly for OS upgrades and critical fixes. This allows customers to receive critical patches and CVE fixes faster. Larger changes and new features are saved for minor and major releases. For each major or minor version channel, there is a corresponding "cluster-scoped" channel available. Cluster-scoped channels deploy operators that can manage all namespaces, while non-cluster-scoped channels can only manage resources in their own namespace. Important Cluster-scoped bundles are not compatible with namespace-scoped bundles. Do not try to switch between normal (stable-2.4 for example) channels and cluster-scoped (stable-2.4-cluster-scoped) channels, as this is not supported. 10.5. Upgrading the Ansible Automation Platform Operator To upgrade to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, you can do the following: Prerequisites Read the Release notes for 2.5 [Optional] You need to deploy all of your Red Hat Ansible Automation Platform services (automation controller, automation hub, Event-Driven Ansible) to the same, single namespace before upgrading to 2.5 (only for existing deployments). For more information see, Migrating from one namespace to another . Review the Backup and recovery for operator environments guide and backup your services: AutomationControllerBackup AutomationHubBackup EDABackup Procedure Log in to OpenShift Container Platform. Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Subscriptions tab. Change the channel from stable-2.4 to stable-2.5. An InstallPlan is created for the user. Click Preview InstallPlan . Click Approve . Create a Custom Resource (CR) using the Ansible Automation Platform UI. The automation controller and automation hub UIs remain until all SSO configuration is supported in the platform gateway UI. For more information on configuring your updated Ansible Automation Platform Operator, see Configuring the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform . 10.6. 
Creating Ansible Automation Platform custom resources After upgrading to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, you can create an Ansible Automation Platform custom resource (CR) that specifies the names of your existing deployments, in the same namespace. Procedure This example outlines the steps to deploy a new Event-Driven Ansible setup after upgrading to the latest version, with existing automation controller and automation hub deployments already in place. The Appendix contains more examples of Ansible Automation Platform CRs for different deployments. Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Details tab. On the Ansible Automation Platform tile click Create instance . From the Create Ansible Automation Platform page enter a name for your instance in the Name field. Click YAML view and paste the following YAML ( aap-existing-controller-and-hub-new-eda.yml ): --- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller #obtain name from controller CR disabled: false eda: disabled: false hub: name: existing-hub disabled: false Click Create . Note You can override the operator's default image for automation controller, automation hub, or platform-resource app images by specifying the preferred image on the YAML spec. This enables upgrading a specific deployment, like a controller, without updating the operator. The recommended approach however, is to upgrade the operator and use the default image values. Verification Navigate to your Ansible Automation Platform Operator deployment and click All instances to verify whether all instances have deployed correctly. You should see the Ansible Automation Platform instance and the deployed AutomationController , EDA , and AutomationHub instances here. Alternatively, you can verify whether all instances deployed correctly by running oc get route in the command line. 10.7. Ansible Automation Platform post-upgrade steps After a successful upgrade to Ansible Automation Platform 2.5, the crucial step is migrating your users to the latest version of the platform. User data and legacy authentication settings from automation controller and private automation hub are carried over during the upgrade process and allow seamless initial access to the platform after upgrade. Customers can log in without additional action. However, to fully transition authentication to use all of the features and capabilities of the 2.5 platform gateway, a manual process is required post-upgrade to leverage the new authentication framework. In the context of upgrading to Ansible Automation Platform 2.5, this manual process is referred to as migration . There are important notes and considerations for each type of user migration, including the following: Admin users Normal users SAML users Be sure to read through the important notes highlighted for each user type to help make the migration process as smooth as possible. 10.7.1. Migrating admin users Upgrades from Ansible Automation Platform 2.4 to 2.5 allows for the migration of administrators for each component with their existing component-level admin privileges maintained. However, escalation of privileges to platform gateway administrator is not automatic during the upgrade process. 
This ensures a secure privilege escalation process that can be customized to meet the organization's specific needs. Prerequisites Review current admin roles for the individual services in your current deployment. Confirm the users who will require platform gateway admin rights post-upgrade. 10.7.1.1. Key considerations Component-level admin privileges are retained: Administrators for automation controller and automation hub will retain their existing admin privileges for those respective services post-upgrade. For example, an admin of automation controller will continue to have full administration privileges for automation controller resources. Note Users previously designated as automation controller or automation hub administrators are labeled as Normal in the User type column of the Users list view. This is a mischaracterization. You can verify that these users have, in fact, retained their service level administrator privileges by editing the account: Procedure From the navigation panel of the platform gateway, select Access Management Users . Select the check box for the user that you want to modify. Click the Pencil icon and select Edit user . The Edit user page is displayed where you can see the service level administrator privileges assigned by the User type checkboxes. See Editing a user for more information on these user types. Note Only a platform administrator can escalate your privileges. Escalation to platform gateway admin must be manually configured post-upgrade: During the upgrade process, admin privileges for individual services are not automatically translated to platform administrator privileges. Escalation to platform gateway admin must be granted by the platform administrator after upgrade and migration. Each service admin retains the original scope of their access until the access is changed. As a platform administrator, you can escalate a user's privileges by selecting the Ansible Automation Platform Administrator checkbox. 10.7.2. Migrating normal users When you upgrade from Ansible Automation Platform 2.4 to 2.5, your existing user account is automatically migrated to a single platform account. However, if you have multiple component accounts (such as automation controller, private automation hub, and Event-Driven Ansible), your accounts must be linked to use the centralized features of the platform. 10.7.2.1. Key considerations Service accounts are prefixed: Users with accounts on multiple services in 2.4 are migrated as individual users in 2.5 and prefixed to identify the service from which they were migrated. For example, automation hub accounts are prefixed as hub_<username> . Automation controller user names do not include a prefix. Automation controller user accounts take precedence: When an individual user had accounts on multiple services in 2.4, priority is given to their automation controller account during migration, so those are not renamed. Component level roles are retained until user migration is complete: When users log in using an existing service account and do not perform the account linking process, only the roles for that specific service account are available. The migration process is completed once the user performs the account linking process. At that time, all roles for all services are migrated into the new platform gateway user account. 10.7.2.2. Additional resources See Creating a user for more information on user types. 10.7.2.3. 
Linking your account Ansible Automation Platform 2.5 provides a centralized location for users, teams and organizations to access the platform's services and features. The first time you log in to Ansible Automation Platform 2.5, the platform searches through the existing services to locate a user account with the credentials you entered. When there is a match to an existing account, that account is registered and becomes centrally managed by the platform. Any subsequent component accounts in the system are orphaned and cannot be used to log into the platform. To address this problem, use the account linking procedure to authenticate from any of your existing component accounts and still be recognized by the platform. Linking accounts associates existing component accounts with the same user profile. Prerequisites You have completed the upgrade process and have a legacy Ansible Automation Platform account and credentials. Procedure If you have completed the upgrade process and have a legacy Ansible Automation Platform subscription, follow the account linking procedure below to migrate your account to Ansible Automation Platform 2.5. Navigate to the login page for Ansible Automation Platform. In the login modal, select either I have an automation controller account or I have an automation hub account based on the credentials you have. On the next screen, enter the legacy credentials for the component account you selected and click Log in . Note If you are logging in using OIDC credentials, see How to fix broken OIDC redirect after upgrading to AAP 2.5 . If you have successfully linked your account, the screen shows your username with a green checkmark beside it. If you have other legacy accounts that you want to link, enter those account credentials and click Link to link them to your centralized platform gateway account. Click Submit to complete linking your legacy accounts. After your accounts are linked, depending on your authentication method, you might be prompted to create a new username and password. These credentials will replace your legacy credentials for each component account. You can also link your legacy account manually by taking the following steps: Select your user icon at the top right of your screen, and select User details . Select the More Actions icon ... > Link user accounts . Enter the credentials for the account that you want to link. If you encounter an error message telling you that your account could not be authenticated, contact your platform administrator. Note If you log into Ansible Automation Platform for the first time and are prompted to change your username, this is an indication that another user has already logged into Ansible Automation Platform with the same username. To proceed with account migration, follow the prompts to change your username. Ansible Automation Platform uses your password to authenticate which account or accounts belong to you. After you have migrated your user account, you can manage your account from the Access Management menu. See Managing access with role based access control . 10.7.3. Migrating Single Sign-On (SSO) users When upgrading from Ansible Automation Platform 2.4 to 2.5, you must migrate your Single Sign-On (SSO) user accounts if you want to continue using SSO capabilities after the upgrade. Follow the steps in this procedure to ensure a smooth SSO user migration. 10.7.3.1. 
Key considerations SSO configurations are not migrated automatically during upgrade to 2.5: While the legacy authentication settings are carried over during the upgrade process and allow seamless initial access to the platform after upgrade, SSO configurations must be manually migrated over to a new Ansible Automation Platform 2.5 authentication configuration. The legacy configuration acts as a reference to preserve existing authentication capabilities and facilitate the migration process. The legacy authentication configuration should not be modified directly or used after migration is complete. SSO migration is not currently supported in the UI: While migration of SSO accounts is supported in 2.5, the configuration is not supported through the platform UI and must be done through the API /api/gateway/v1/authenticators/ . Migration of SSO must happen before users log in and start account linking: You must enable the Auto migrate users to setting after configuring SSO in 2.5 and before any users log in. Prerequisites You have configured an SSO authentication method in the platform gateway following the steps in Configuring an authentication type . This will be the configuration that you will migrate your SSO users to. Note Ansible Automation Platform 2.4 SSO configurations are renamed during the upgrade process and are displayed in the Authentication Methods list view with a prefix to indicate a legacy configuration: for example, legacy_sso-saml-<entity id> . The Authentication type is also listed as legacy sso . These configurations cannot be modified. Procedure Log in to the platform gateway API. Go to /api/gateway/v1/authenticators/ , locate the legacy authenticator and click the link. This opens the HTML form for that authenticator. Select the new platform gateway authenticator from the Auto migrate users to list. Click PUT . Once you set up the auto migrate functionality, you should be able to log in with SSO in the platform gateway and it will automatically link any matching accounts from the legacy SSO authenticator. Additional resources Refer to Ansible Automation Platform 2.4 to 2.5. Linking accounts post upgrade, and Setting up SAML authentication for a demonstration of the post upgrade steps. | [
"--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false controller: name: existing-controller #obtain name from controller CR disabled: false eda: disabled: false hub: name: existing-hub disabled: false"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/operator-upgrade_operator-platform-doc |
HawtIO Diagnostic Console Guide | HawtIO Diagnostic Console Guide Red Hat build of Apache Camel 4.8 Manage applications with Red Hat build of HawtIO | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/hawtio_diagnostic_console_guide/index |
12.6. Adding Virtual Machines to a Virtual Machine Pool | 12.6. Adding Virtual Machines to a Virtual Machine Pool If you require more virtual machines than originally provisioned in a virtual machine pool, add more machines to the pool. Adding Virtual Machines to a Virtual Machine Pool Click Compute Pools and select the virtual machine pool. Click Edit . Enter the number of additional virtual machines in the Increase number of VMs in pool by field. Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/adding_virtual_machines_to_a_vm_pool |
Chapter 18. Service [v1] | Chapter 18. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ServiceSpec describes the attributes that a user creates on a service. status object ServiceStatus represents the current status of a service. 18.1.1. .spec Description ServiceSpec describes the attributes that a user creates on a service. Type object Property Type Description allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. clusterIP string clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies clusterIPs array (string) ClusterIPs is a list of IP addresses assigned to this service, and are usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. 
This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be empty) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. If this field is not specified, it will be initialized from the clusterIP field. If this field is specified, clients must ensure that clusterIPs[0] and clusterIP have the same value. This field may hold a maximum of two entries (dual-stack IPs, in either order). These IPs must correspond to the values of the ipFamilies field. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies externalIPs array (string) externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. externalName string externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) and requires type to be "ExternalName". externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's "externally-facing" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get "Cluster" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node. Possible enum values: - "Cluster" - "Cluster" routes traffic to all endpoints. - "Local" - "Local" preserves the source IP of the traffic by routing only to endpoints on the same node as the traffic was received on (dropping the traffic if there are no local endpoints). healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. 
load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Possible enum values: - "Cluster" routes traffic to all endpoints. - "Local" routes traffic only to endpoints on the same node as the client pod (dropping the traffic if there are no local endpoints). ipFamilies array (string) IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName. This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be "SingleStack" (a single IP family), "PreferDualStack" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or "RequireDualStack" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. Possible enum values: - "PreferDualStack" indicates that this service prefers dual-stack when the cluster is configured for dual-stack. If the cluster is not configured for dual-stack the service will be assigned a single IPFamily. If the IPFamily is not set in service.spec.ipFamilies then the service will be assigned the default IPFamily configured on the cluster - "RequireDualStack" indicates that this service requires dual-stack. Using IPFamilyPolicyRequireDualStack on a single stack cluster will result in validation errors. The IPFamilies (and their order) assigned to this service is based on service.spec.ipFamilies. If service.spec.ipFamilies was not provided then it will be assigned according to how they are configured on the cluster. If service.spec.ipFamilies has only one entry then the alternative IPFamily will be added by apiserver - "SingleStack" indicates that this service is required to have a single IPFamily. 
The IPFamily assigned is based on the default IPFamily used by the cluster or as identified by service.spec.ipFamilies field loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type. loadBalancerIP string Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version. loadBalancerSourceRanges array (string) If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ ports array The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies ports[] object ServicePort contains information on service's port. publishNotReadyAddresses boolean publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered "ready" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior. selector object (string) Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/ sessionAffinity string Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None.
More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Possible enum values: - "ClientIP" is the Client IP based. - "None" - no session affinity. sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity. type string type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types Possible enum values: - "ClusterIP" means a service will only be accessible inside the cluster, via the cluster IP. - "ExternalName" means a service consists of only a reference to an external name that kubedns or equivalent will return as a CNAME record, with no exposing or proxying of any pods involved. - "LoadBalancer" means a service will be exposed via an external load balancer (if the cloud provider supports it), in addition to 'NodePort' type. - "NodePort" means a service will be exposed on one port of every node, in addition to 'ClusterIP' type. 18.1.2. .spec.ports Description The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Type array 18.1.3. .spec.ports[] Description ServicePort contains information on service's port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service. nodePort integer The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport port integer The port that will be exposed by this service. 
protocol string The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. targetPort IntOrString Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service 18.1.4. .spec.sessionAffinityConfig Description SessionAffinityConfig represents the configurations of session affinity. Type object Property Type Description clientIP object ClientIPConfig represents the configurations of Client IP based session affinity. 18.1.5. .spec.sessionAffinityConfig.clientIP Description ClientIPConfig represents the configurations of Client IP based session affinity. Type object Property Type Description timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && <= 86400 (for 1 day) if ServiceAffinity == "ClientIP". Default value is 10800 (for 3 hours). 18.1.6. .status Description ServiceStatus represents the current status of a service. Type object Property Type Description conditions array (Condition) Current service state loadBalancer object LoadBalancerStatus represents the status of a load-balancer. 18.1.7. .status.loadBalancer Description LoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. ingress[] object LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. 18.1.8. .status.loadBalancer.ingress Description Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. Type array 18.1.9. .status.loadBalancer.ingress[] Description LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. Type object Property Type Description hostname string Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers). ip string IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers). ports array Ports is a list of records of service ports. If used, every port defined in the service should have an entry in it. ports[] object 18.1.10. .status.loadBalancer.ingress[].ports Description Ports is a list of records of service ports. If used, every port defined in the service should have an entry in it. Type array 18.1.11.
.status.loadBalancer.ingress[].ports[] Description Type object Required port protocol Property Type Description error string Error is to record the problem with the service port The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer Port is the port number of the service port of which status is recorded here protocol string Protocol is the protocol of the service port of which status is recorded here The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 18.2. API endpoints The following API endpoints are available: /api/v1/services GET : list or watch objects of kind Service /api/v1/watch/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services DELETE : delete collection of Service GET : list or watch objects of kind Service POST : create a Service /api/v1/watch/namespaces/{namespace}/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services/{name} DELETE : delete a Service GET : read the specified Service PATCH : partially update the specified Service PUT : replace the specified Service /api/v1/watch/namespaces/{namespace}/services/{name} GET : watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/services/{name}/status GET : read status of the specified Service PATCH : partially update status of the specified Service PUT : replace status of the specified Service 18.2.1. /api/v1/services Table 18.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". 
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Service Table 18.2. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty 18.2.2. /api/v1/watch/services Table 18.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 18.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.3. /api/v1/namespaces/{namespace}/services Table 18.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 18.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Service Table 18.7. 
Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 18.8. Body parameters Parameter Type Description body DeleteOptions schema Table 18.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Service Table 18.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.11. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty HTTP method POST Description create a Service Table 18.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.13. Body parameters Parameter Type Description body Service schema Table 18.14. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 202 - Accepted Service schema 401 - Unauthorized Empty 18.2.4. 
/api/v1/watch/namespaces/{namespace}/services Table 18.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 18.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 18.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.5. /api/v1/namespaces/{namespace}/services/{name} Table 18.18. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 18.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Service Table 18.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 18.21. Body parameters Parameter Type Description body DeleteOptions schema Table 18.22. HTTP responses HTTP code Reponse body 200 - OK Service schema 202 - Accepted Service schema 401 - Unauthorized Empty HTTP method GET Description read the specified Service Table 18.23. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Service Table 18.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.25. Body parameters Parameter Type Description body Patch schema Table 18.26. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Service Table 18.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.28. Body parameters Parameter Type Description body Service schema Table 18.29. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty 18.2.6. /api/v1/watch/namespaces/{namespace}/services/{name} Table 18.30. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 18.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. 
Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 18.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.7. /api/v1/namespaces/{namespace}/services/{name}/status Table 18.33. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 18.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Service Table 18.35. HTTP responses HTTP code Response body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Service Table 18.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.37. Body parameters Parameter Type Description body Patch schema Table 18.38. HTTP responses HTTP code Response body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Service Table 18.39. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.40. Body parameters Parameter Type Description body Service schema Table 18.41. HTTP responses HTTP code Response body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/service-v1 |
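As a minimal illustration of the watch endpoint documented in section 18.2.6, assuming a Service named my-service in the namespace my-namespace (both placeholders): the recommended pattern is a list operation with the watch parameter, which with the OpenShift CLI is simply:

oc get service my-service -n my-namespace --watch

A raw HTTP client can still call the deprecated watch path directly; the API server URL, CA file, token, and query parameter values below are arbitrary examples:

curl --cacert <ca_file> -H "Authorization: Bearer <token>" "https://<api_server>:6443/api/v1/watch/namespaces/my-namespace/services/my-service?resourceVersion=0&timeoutSeconds=60"

Each line of the streamed response body is a WatchEvent object whose type field reports the change, such as ADDED, MODIFIED, DELETED, or BOOKMARK.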
Chapter 3. Troubleshooting installation issues | Chapter 3. Troubleshooting installation issues To troubleshoot a failed MicroShift installation, you can run an sos report. Use the sos report command to generate a detailed report that shows all of the enabled plugins and data from the different components and applications in a system. 3.1. Gathering data from an sos report Prerequisites You must have the sos package installed. Procedure Log into the failing host as a root user. Perform the debug report creation procedure by running the following command: USD microshift-sos-report Example output sosreport (version 4.5.1) This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications. An archive containing the collected information will be generated in /var/tmp/sos.o0sznf_8 and may be provided to a Red Hat support representative. Any information provided to Red Hat will be treated in accordance with the published support policies at: Distribution Website : https://www.redhat.com/ Commercial Support : https://www.access.redhat.com/ The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party. No changes will be made to system configuration. Setting up archive ... Setting up plugins ... Running plugins. Please wait ... Starting 1/2 microshift [Running: microshift] Starting 2/2 microshift_ovn [Running: microshift microshift_ovn] Finishing plugins [Running: microshift] Finished running plugins Found 1 total reports to obfuscate, processing up to 4 concurrently sosreport-microshift-rhel9-2023-03-31-axjbyxw : Beginning obfuscation... sosreport-microshift-rhel9-2023-03-31-axjbyxw : Obfuscation completed Successfully obfuscated 1 report(s) Creating compressed archive... A mapping of obfuscated elements is available at /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-private_map Your sosreport has been generated and saved in: /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz Size 444.14KiB Owner root sha256 922e5ff2db25014585b7c6c749d2c44c8492756d619df5e9838ce863f83d4269 Please send this file to your support representative. 3.2. Additional resources About MicroShift sos reports Generating an sos report for technical support | [
"microshift-sos-report",
"sosreport (version 4.5.1) This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications. An archive containing the collected information will be generated in /var/tmp/sos.o0sznf_8 and may be provided to a Red Hat support representative. Any information provided to Red Hat will be treated in accordance with the published support policies at: Distribution Website : https://www.redhat.com/ Commercial Support : https://www.access.redhat.com/ The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party. No changes will be made to system configuration. Setting up archive Setting up plugins Running plugins. Please wait Starting 1/2 microshift [Running: microshift] Starting 2/2 microshift_ovn [Running: microshift microshift_ovn] Finishing plugins [Running: microshift] Finished running plugins Found 1 total reports to obfuscate, processing up to 4 concurrently sosreport-microshift-rhel9-2023-03-31-axjbyxw : Beginning obfuscation sosreport-microshift-rhel9-2023-03-31-axjbyxw : Obfuscation completed Successfully obfuscated 1 report(s) Creating compressed archive A mapping of obfuscated elements is available at /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-private_map Your sosreport has been generated and saved in: /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz Size 444.14KiB Owner root sha256 922e5ff2db25014585b7c6c749d2c44c8492756d619df5e9838ce863f83d4269 Please send this file to your support representative."
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/microshift-installing-troubleshooting |
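Before sending the archive to your support representative, you can optionally confirm its integrity and list its contents. The file name below is the one shown in the example output; substitute the name generated on your system:

sha256sum /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz
tar -tf /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz | head

The checksum printed by sha256sum should match the sha256 value reported by the microshift-sos-report command.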
probe::workqueue.execute | probe::workqueue.execute Name probe::workqueue.execute - Executing deferred work Synopsis workqueue.execute Values wq_thread task_struct of the workqueue thread work_func pointer to handler function work work_struct* being executed | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-workqueue-execute |
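A brief usage sketch, assuming SystemTap and the matching kernel debuginfo are installed and that this probe alias is available on your kernel; the five-second duration is arbitrary. It prints which handler function each workqueue thread executes:

stap -e 'probe workqueue.execute { printf("%s executes %s\n", task_execname(wq_thread), symname(work_func)) } probe timer.s(5) { exit() }'

Here task_execname resolves the thread name from wq_thread, and symname resolves the handler symbol from work_func; both are standard tapset functions.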
Preface | Preface The contents within this guide provide an overview of Clair for Red Hat Quay, running Clair on standalone Red Hat Quay and Operator deployments, and advanced Clair configuration. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/vulnerability_reporting_with_clair_on_red_hat_quay/pr01 |
2.2. Starting the Virtual Machine | 2.2. Starting the Virtual Machine 2.2.1. Starting a Virtual Machine Starting Virtual Machines Click Compute Virtual Machines and select a virtual machine with a status of Down . Click Run . The Status of the virtual machine changes to Up , and the operating system installation begins. Open a console to the virtual machine if one does not open automatically. Note A virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information. 2.2.2. Opening a Console to a Virtual Machine Use Remote Viewer to connect to a virtual machine. Connecting to Virtual Machines Install Remote Viewer if it is not already installed. See Section 1.4.1, "Installing Console Components" . Click Compute Virtual Machines and select a virtual machine. Click Console . A console.vv file will be downloaded. Click on the file and a console window will automatically open for the virtual machine. Note You can configure the system to automatically connect to a virtual machine. See Section 2.2.4, "Automatically Connecting to a Virtual Machine" . 2.2.3. Opening a Serial Console to a Virtual Machine You can access a virtual machine's serial console from the command line instead of opening a console from the Administration Portal or the VM Portal. The serial console is emulated through VirtIO channels, using SSH and key pairs. The Manager acts as a proxy for the connection, provides information about virtual machine placement, and stores the authentication keys. You can add public keys for each user from either the Administration Portal or the VM Portal. You can access serial consoles for only those virtual machines for which you have appropriate permissions. Important To access the serial console of a virtual machine, the user must have UserVmManager , SuperUser , or UserInstanceManager permission on that virtual machine. These permissions must be explicitly defined for each user. It is not enough to assign these permissions to Everyone . The serial console is accessed through TCP port 2222 on the Manager. This port is opened during engine-setup on new installations. To change the port, see ovirt-vmconsole/README . Using a serial console requires configuring the firewall rules. For more information, see: Rule "M3" in the Red Hat Virtualization Manager Firewall Requirements Rule "H2" in the Host Firewall Requirements The serial console relies on the ovirt-vmconsole package and the ovirt-vmconsole-proxy on the Manager, and the ovirt-vmconsole package and the ovirt-vmconsole-host package on the virtualization hosts. These packages are installed by default on new installations. To install the packages on existing installations, reinstall the host. See Reinstalling Hosts in the Administration Guide . Enabling a Virtual Machine's Serial Console On the virtual machine whose serial console you are accessing, add the following lines to /etc/default/grub : Note GRUB_CMDLINE_LINUX_DEFAULT applies this configuration only to the default menu entry. Use GRUB_CMDLINE_LINUX to apply the configuration to all the menu entries. If these lines already exist in /etc/default/grub , update them. Do not duplicate them. 
Rebuild /boot/grub2/grub.cfg : BIOS-based machines: UEFI-based machines: See GRUB 2 over a Serial Console in the Red Hat Enterprise Linux 7 System Administrator's Guide for details. On the client machine from which you are accessing the virtual machine serial console, generate an SSH key pair. The Manager supports standard SSH key types, for example, an RSA key: This command generates a public key and a private key. In the Administration Portal or the VM Portal, click the name of the signed-in user on the header bar and click Options to open the Edit Options window. In the User's Public Key text field, paste the public key of the client machine that will be used to access the serial console. Click Compute Virtual Machines and select a virtual machine. Click Edit . In the Console tab of the Edit Virtual Machine window, select the Enable VirtIO serial console check box. Connecting to a Virtual Machine's Serial Console On the client machine, connect to the virtual machine's serial console: If a single virtual machine is available, this command connects the user to that virtual machine: If more than one virtual machine is available, this command lists the available virtual machines and their IDs: Enter the number of the machine to which you want to connect, and press Enter . Alternatively, connect directly to a virtual machine using its unique identifier or its name: Disconnecting from a Virtual Machine's Serial Console Press any key followed by ~ . to close a serial console session. If the serial console session is disconnected abnormally, a TCP timeout occurs. You will be unable to reconnect to the virtual machine's serial console until the timeout period expires. 2.2.4. Automatically Connecting to a Virtual Machine Once you have logged in, you can automatically connect to a single running virtual machine. This can be configured in the VM Portal. Automatically Connecting to a Virtual Machine In the Virtual Machines page, click the name of the virtual machine to go to the details view. Click the pencil icon beside Console and set Connect automatically to ON . The next time you log in to the VM Portal, if you have only one running virtual machine, you will automatically connect to that machine. | [
"GRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\" GRUB_TERMINAL=\"console serial\" GRUB_SERIAL_COMMAND=\"serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1\"",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg",
"ssh-keygen -t rsa -b 2048 -C \" user@domain \" -f .ssh/serialconsolekey",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN -i .ssh/serialconsolekey Red Hat Enterprise Linux Server release 6.7 (Santiago) Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64 USER login:",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN -i .ssh/serialconsolekey list 1. vm1 [vmid1] 2. vm2 [vmid2] 3. vm3 [vmid3] > 2 Red Hat Enterprise Linux Server release 6.7 (Santiago) Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64 USER login:",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN connect --vm-id vmid1",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN connect --vm-name vm1"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/starting_the_virtual_machine |
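Two optional checks can be helpful when setting up the serial console. The public half of the key pair generated above is stored alongside the private key and is the value to paste into the User's Public Key field:

cat .ssh/serialconsolekey.pub

If the connection to the proxy is refused, a simple reachability test of TCP port 2222 on the Manager (shown here with nc, which might not be installed by default) helps distinguish a firewall problem from a key problem:

nc -zv Manager_FQDN 2222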
Chapter 4. 21 August 2024 | Chapter 4. 21 August 2024 This release includes the following new feature: Red Hat Ansible Lightspeed trial Existing Ansible users can now start a free 90-day Red Hat Ansible Lightspeed cloud service trial. You can create single-task and multitask recommendations, generate playbooks, and view playbook explanations with a trial account. To start your Red Hat Ansible Lightspeed trial, you need a trial or paid subscription to the Red Hat Ansible Automation Platform; however, you do not need a trial or paid subscription to IBM watsonx Code Assistant. For more information, see Starting a trial of Red Hat Ansible Lightspeed . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_release_notes/lightspeed-key-features-august2024_lightspeed-release-notes |
probe::sunrpc.clnt.restart_call | probe::sunrpc.clnt.restart_call Name probe::sunrpc.clnt.restart_call - Restart an asynchronous RPC call Synopsis sunrpc.clnt.restart_call Values servername the server machine name tk_priority the task priority xid the transmission id prog the RPC program number tk_runstate the task run status tk_pid the debugging aid of task tk_flags the task flags | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-restart-call |
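A short usage sketch, assuming SystemTap and the matching kernel debuginfo are installed and that this probe alias is available on your kernel; the ten-second duration is arbitrary. It prints the server name, transmission id, and program number each time an asynchronous RPC call is restarted:

stap -e 'probe sunrpc.clnt.restart_call { printf("restart: server=%s xid=%d prog=%d\n", servername, xid, prog) } probe timer.s(10) { exit() }'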
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/proc_providing-feedback-on-red-hat-documentation_configuring-authentication-and-authorization-in-rhel |
Chapter 1. Overview of authentication and authorization | Chapter 1. Overview of authentication and authorization 1.1. Glossary of common terms for OpenShift Container Platform authentication and authorization This glossary defines common terms that are used in OpenShift Container Platform authentication and authorization. authentication Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users can access the cluster. authorization Authorization determines whether the identified user has permissions to perform the requested action. bearer token A bearer token is used to authenticate to the API with the header Authorization: Bearer <token> . Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). config map A config map provides a way to inject configuration data into the pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. containers Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, public or private cloud, or your local host. Custom Resource (CR) A CR is an extension of the Kubernetes API. group A group is a set of users. A group is useful for granting permissions to multiple users one time. HTPasswd HTPasswd updates the files that store usernames and passwords for authentication of HTTP users. Keystone Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services. Lightweight directory access protocol (LDAP) LDAP is a protocol that queries user information. manual mode In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). mint mode Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. namespace A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OAuth client An OAuth client is used to get a bearer token. OAuth server The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. OpenID Connect OpenID Connect is a protocol that lets users authenticate with single sign-on (SSO) to access sites that use OpenID Providers. passthrough mode In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. pod A pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. regular users Users that are created automatically in the cluster upon first login or via the API. request header A request header is an HTTP header that is used to provide information about HTTP request context, so that the server can track the response of the request. 
role-based access control (RBAC) A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles. service accounts Service accounts are used by the cluster components or applications. system users Users that are created automatically when the cluster is installed. users A user is an entity that can make requests to the API. 1.2. About authentication in OpenShift Container Platform To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. Note If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error. An administrator can configure authentication through the following tasks: Configuring an identity provider: You can define any supported identity provider in OpenShift Container Platform and add it to your cluster. Configuring the internal OAuth server : The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. You can configure the token duration and inactivity timeout, and customize the internal OAuth server URL. Note Users can view and manage OAuth tokens owned by them . Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients . You can register and configure additional OAuth clients . Note When users send a request for an OAuth token, they must specify either a default or custom OAuth client that receives and uses the token. Managing cloud provider credentials using the Cloud Credential Operator : Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks. Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user . 1.3. About authorization in OpenShift Container Platform Authorization involves determining whether the identified user has permissions to perform the requested action. Administrators can define permissions and assign them to users using the RBAC objects, such as rules, roles, and bindings . To understand how authorization works in OpenShift Container Platform, see Evaluating authorization . You can also control access to an OpenShift Container Platform cluster through projects and namespaces . Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs) . You can manage authorization for OpenShift Container Platform through the following tasks: Viewing local and cluster roles and bindings. Creating a local role and assigning it to a user or group. Creating a cluster role and assigning it to a user or group: OpenShift Container Platform includes a set of default cluster roles . You can create additional cluster roles and add them to a user or group . Creating a cluster-admin user: By default, your cluster has only one cluster administrator called kubeadmin . You can create another cluster administrator . 
Before creating a cluster administrator, ensure that you have configured an identity provider. Note After creating the cluster admin user, delete the existing kubeadmin user to improve cluster security. Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user's credentials. A user can create and use a service account in applications and also as an OAuth client . Scoping tokens : A scoped token is a token that identifies as a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account. Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the OpenShift Container Platform user groups. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/overview-of-authentication-authorization |
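As a minimal illustration of the bearer token authentication described above, assuming you are already logged in with the OpenShift CLI and that the API server URL and CA file are placeholders: display your current access token with:

oc whoami -t

and present it directly to the API in the Authorization header:

curl --cacert <ca_file> -H "Authorization: Bearer <token>" https://api.<cluster_domain>:6443/apis

A request made without a valid token or client certificate is rejected with an HTTP 401 error, as noted above.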
8.4.2. Setting [repository] Options | 8.4.2. Setting [repository] Options The [ repository ] sections, where repository is a unique repository ID such as my_personal_repo (spaces are not permitted), allow you to define individual Yum repositories. To avoid conflicts, custom repositories should not use names used by Red Hat repositories. The following is a bare-minimum example of the form a [ repository ] section takes: Every [ repository ] section must contain the following directives: name = repository_name where repository_name is a human-readable string describing the repository. baseurl = repository_url where repository_url is a URL to the directory where the repodata directory of a repository is located: If the repository is available over HTTP, use: http://path/to/repo If the repository is available over FTP, use: ftp://path/to/repo If the repository is local to the machine, use: file:///path/to/local/repo If a specific online repository requires basic HTTP authentication, you can specify your user name and password by prepending it to the URL as username : password @ link . For example, if a repository on http://www.example.com/repo/ requires a user name of " user " and a password of " password " , then the baseurl link could be specified as http:// user : password @www.example.com/repo/ . Usually this URL is an HTTP link, such as: Note that Yum always expands the USDreleasever , USDarch , and USDbasearch variables in URLs. For more information about Yum variables, see Section 8.4.3, "Using Yum Variables" . Another useful [ repository ] directive is the following: enabled = value where value is one of: 0 - Do not include this repository as a package source when performing updates and installs. This is an easy way of quickly turning repositories on and off, which is useful when you desire a single package from a repository that you do not want to enable for updates or installs. 1 - Include this repository as a package source. Turning repositories on and off can also be performed by passing either the --enablerepo= repo_name or --disablerepo= repo_name option to yum , or through the Add/Remove Software window of the PackageKit utility. Many more [ repository ] options exist. For a complete list, see the [repository] OPTIONS section of the yum.conf (5) manual page. Example 8.6. A sample /etc/yum.repos.d/redhat.repo file The following is a sample /etc/yum.repos.d/redhat.repo file: | [
"[ repository ] name= repository_name baseurl= repository_url",
"baseurl=http://path/to/repo/releases/USDreleasever/server/USDbasearch/os/",
"# Red Hat Repositories Managed by (rhsm) subscription-manager # [red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-rpms] name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (RPMs) baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/USDreleasever/USDbasearch/scalablefilesystem/os enabled = 1 gpgcheck = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify = 1 sslcacert = /etc/rhsm/ca/redhat-uep.pem sslclientkey = /etc/pki/entitlement/key.pem sslclientcert = /etc/pki/entitlement/11300387955690106.pem [red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-source-rpms] name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (Source RPMs) baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/USDreleasever/USDbasearch/scalablefilesystem/source/SRPMS enabled = 0 gpgcheck = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify = 1 sslcacert = /etc/rhsm/ca/redhat-uep.pem sslclientkey = /etc/pki/entitlement/key.pem sslclientcert = /etc/pki/entitlement/11300387955690106.pem [red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-debug-rpms] name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (Debug RPMs) baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/USDreleasever/USDbasearch/scalablefilesystem/debug enabled = 0 gpgcheck = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify = 1 sslcacert = /etc/rhsm/ca/redhat-uep.pem sslclientkey = /etc/pki/entitlement/key.pem sslclientcert = /etc/pki/entitlement/11300387955690106.pem"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Setting_repository_Options |
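As a brief example of the directives described above, a minimal custom definition could be saved as /etc/yum.repos.d/my_personal_repo.repo (the URL is a placeholder):

[my_personal_repo]
name=My Personal Repository
baseurl=http://www.example.com/repo/
enabled=0

With enabled=0 , the repository is ignored during normal updates and installs but can be turned on for a single transaction, for example:

yum --enablerepo=my_personal_repo install package_name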
function::kernel_long | function::kernel_long Name function::kernel_long - Retrieves a long value stored in kernel memory Synopsis Arguments addr The kernel address to retrieve the long from Description Returns the long value from a given kernel memory address. Reports an error when reading from the given address fails. | [
"kernel_long:long(addr:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-kernel-long |
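A short, contrived usage sketch, assuming SystemTap with kernel debuginfo is installed; the probe point and variable are illustrative only. Because $file in vfs_read is a kernel pointer, passing it to kernel_long reads the first long-sized word of the structure it points to:

stap -e 'probe kernel.function("vfs_read") { printf("first word at file pointer: %x\n", kernel_long($file)) } probe timer.s(5) { exit() }'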
Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations | Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations In OpenShift Container Platform version 4.14, you can install a customized cluster on infrastructure that the installation program provisions on IBM Power Virtual Server. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Important IBM Power Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power(R) Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 4.6.1. 
Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 7 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: "ibmcloud-resource-group" 8 serviceInstanceID: "powervs-region-service-instance-id" vpcRegion : vpc-region publish: External pullSecret: '{"auths": ...}' 9 sshKey: ssh-ed25519 AAAA... 10 1 4 If you do not provide these parameters and values, the installation program provides the default value. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 7 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 8 The name of an existing resource group. 9 Required. The installation program prompts you for this value. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. 
Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. 
To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . 
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.12. Next steps Customize your cluster If necessary, you can opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 7 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 8 serviceInstanceID: \"powervs-region-service-instance-id\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 9 sshKey: ssh-ed25519 AAAA... 10",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-customizations |
1.3.2.3. Inherently Insecure Services | 1.3.2.3. Inherently Insecure Services Even the most vigilant organization can fall victim to vulnerabilities if the network services they choose are inherently insecure. For instance, there are many services developed under the assumption that they are used over trusted networks; however, this assumption fails as soon as the service becomes available over the Internet - which is itself inherently untrusted. One category of insecure network services is those that require unencrypted user names and passwords for authentication. Telnet and FTP are two such services. If packet sniffing software is monitoring traffic between the remote user and such a service, user names and passwords can be easily intercepted. Inherently, such services can also more easily fall prey to what the security industry terms the man-in-the-middle attack. In this type of attack, an attacker redirects network traffic by tricking a cracked name server on the network to point to his machine instead of the intended server. Once someone opens a remote session to the server, the attacker's machine acts as an invisible conduit, sitting quietly between the remote service and the unsuspecting user, capturing information. In this way an attacker can gather administrative passwords and raw data without the server or the user realizing it. Another category of insecure services includes network file systems and information services such as NFS or NIS, which are developed explicitly for LAN usage but are, unfortunately, extended to include WANs (for remote users). NFS does not, by default, have any authentication or security mechanisms configured to prevent an attacker from mounting the NFS share and accessing anything contained therein. NIS, as well, has vital information that must be known by every computer on a network, including passwords and file permissions, within a plain text ASCII or DBM (ASCII-derived) database. An attacker who gains access to this database can then access every user account on a network, including the administrator's account. By default, Red Hat Enterprise Linux is released with all such services turned off. However, since administrators often find themselves forced to use these services, careful configuration is critical. Refer to Section 2.2, "Server Security" for more information about setting up services in a safe manner. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-threats_to_server_security-inherently_insecure_services
A.3.2. Commands and Interfaces that Support udev | A.3.2. Commands and Interfaces that Support udev Table A.2, "dmsetup Commands to Support udev" summarizes the dmsetup commands that support udev integration. Table A.2. dmsetup Commands to Support udev Command Description dmsetup udevcomplete Used to notify that udev has completed processing the rules and unlocks waiting process (called from within udev rules in 95-dm-notify.rules ). dmsetup udevcomplete_all Used for debugging purposes to manually unlock all waiting processes. dmsetup udevcookies Used for debugging purposes, to show all existing cookies (system-wide semaphores). dmsetup udevcreatecookie Used to create a cookie (semaphore) manually. This is useful to run more processes under one synchronization resource. dmsetup udevreleasecookie Used to wait for all udev processing related to all processes put under that one synchronization cookie. The dmsetup options that support udev integration are as follows. --udevcookie Needs to be defined for all dmsetup processes we would like to add into a udev transaction. It is used in conjunction with udevcreatecookie and udevreleasecookie : Besides using the --udevcookie option, you can just export the variable into an environment of the process: --noudevrules Disables udev rules. Nodes/symlinks will be created by libdevmapper itself (the old way). This option is for debugging purposes, if udev does not work correctly. --noudevsync Disables udev synchronization. This is also for debugging purposes. For more information on the dmsetup and its options, see the dmsetup (8) man page. The LVM commands support the following options that support udev integration: --noudevrules : as for the dmsetup command, disables udev rules. --noudevsync : as for the dmsetup command, disables udev synchronization. The lvm.conf file includes the following options that support udev integration: udev_rules : enables/disables udev_rules for all LVM2 commands globally. udev_sync : enables/disables udev synchronization for all LVM commands globally. For more information on the lvm.conf file options, see the inline comments in the lvm.conf file. | [
"COOKIE=USD(dmsetup udevcreatecookie) dmsetup command --udevcookie USDCOOKIE . dmsetup command --udevcookie USDCOOKIE . . dmsetup command --udevcookie USDCOOKIE . dmsetup udevreleasecookie --udevcookie USDCOOKIE",
"export DM_UDEV_COOKIE=USD(dmsetup udevcreatecookie) dmsetup command dmsetup command dmsetup command"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/udev_commands_interfaces |
Chapter 44. Kernel | Chapter 44. Kernel Heterogeneous memory management included as a Technology Preview Red Hat Enterprise Linux 7.3 introduced the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus, a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line. (BZ#1230959) criu rebased to version 3.5 Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU) , which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. Note that the criu tool depends on Protocol Buffers , a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.5, the criu packages have been upgraded to upstream version 3.5, which provides a number of bug fixes and enhancements. In addition, support for IBM Z and the 64-bit ARM architecture has been added. (BZ# 1400230 , BZ#1464596) kexec as a Technology Preview The kexec system call has been provided as a Technology Preview. This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot. (BZ#1460849) kexec fast reboot as a Technology Preview As a Technology Preview, this update adds the kexec fast reboot feature, which makes the reboot significantly faster. To use this feature, you must load the kexec kernel manually, and then reboot the operating system. It is not possible to make kexec fast reboot the default reboot action. A special case is using kexec fast reboot with Anaconda . Even in this case, kexec fast reboot cannot be made the default. However, when used with Anaconda , the operating system can automatically use kexec fast reboot after the installation is complete if the user boots the kernel with the appropriate Anaconda option. To schedule a kexec reboot, use the inst.kexec command on the kernel command line, or include a reboot --kexec line in the Kickstart file. (BZ#1464377) Unprivileged access to name spaces can be enabled as a Technology Preview You can now set the namespace.unpriv_enable kernel command-line option if required, as a Technology Preview. The default setting is off. When set to 1 , issuing a call to the clone() function with the flag CLONE_NEWNS as an unprivileged user no longer returns an error and allows the operation. However, to enable the unprivileged access to name spaces, the CAP_SYS_ADMIN flag has to be set in some user name space to create a mount name space. (BZ#1350553) SCSI-MQ as a Technology Preview in the qla2xxx driver The qla2xxx driver updated in Red Hat Enterprise Linux 7.4 can now enable the use of SCSI-MQ (multiqueue) with the ql2xmqsupport=1 module parameter. The default value is 0 (disabled).
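As a minimal sketch of making the parameter persistent across reboots, you can place it in a modprobe configuration file; the file name here is illustrative:
# /etc/modprobe.d/qla2xxx.conf
options qla2xxx ql2xmqsupport=1
If the driver is loaded from the initial RAM disk, the initramfs might also need to be rebuilt for the setting to take effect.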
The SCSI-MQ functionality is provided as a Technology Preview when used with the qla2xxx driver. Note that recent performance testing at Red Hat with async IO over Fibre Channel adapters using SCSI-MQ has shown significant performance degradation under certain conditions. A fix is being tested but was not ready in time for Red Hat Enterprise Linux 7.4 General Availability. (BZ#1414957) NVMe over Fibre Channel is now available as a Technology Preview The NVMe over Fibre Channel transport type is now available as a Technology Preview. NVMe over Fibre Channel is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux. To enable NVMe over Fibre Channel in the lpfc driver, edit the /etc/modprobe.d/lpfc.conf file and add one or both of the following options: To enable the NVMe mode of operation, add the lpfc_enable_fc4_type=3 option. To enable target mode, add the lpfc_enable_nvmet=<wwpn list> option, where <wwpn list> is a comma-separated list of World-Wide Port Name (WWPN) values with the 0x prefix. To configure an NVMe target, use the nvmetcli utility. NVMe over Fibre Channel provides a higher-performance, lower-latency I/O protocol over existing Fibre Channel infrastructure. This is especially important with solid-state storage arrays, because it allows the performance benefits of NVMe storage to be passed through the fabric transport, rather than being encapsulated in a different protocol, SCSI. In Red Hat Enterprise Linux 7.5, NVMe over Fibre Channel is available only with Broadcom 32Gbit adapters, which use the lpfc driver. (BZ# 1387768 , BZ#1454386) perf cqm has been replaced by resctrl The Intel Cache Allocation Technology (CAT) was introduced in Red Hat Enterprise Linux 7.4 as a Technology Preview. However, the perf cqm tool did not work correctly due to an incompatibility between perf infrastructure and Cache Quality of Service Monitoring (CQM) hardware support. Consequently, multiple problems occurred when using perf cqm . These problems included, most notably: perf cqm did not support the group of tasks which is allocated using resctrl ; perf cqm gave random and inaccurate data due to several problems with recycling; perf cqm did not provide enough support when running different kinds of events together (the different events are, for example, tasks, system-wide, and cgroup events); perf cqm provided only partial support for cgroup events; the partial support for cgroup events did not work in cases with a hierarchy of cgroup events, or when monitoring a task in a cgroup and the cgroup together; monitoring tasks for the lifetime caused perf overhead; perf cqm reported the aggregate cache occupancy or memory bandwidth over all sockets, while in most cloud and VMM-based use cases the individual per-socket usage is needed. With this update, perf cqm has been replaced by the approach based on the resctrl file system, which addresses all of the aforementioned problems. (BZ# 1457533 , BZ#1288964) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/technology_previews_kernel
Installing OpenShift Serverless | Installing OpenShift Serverless Red Hat OpenShift Serverless 1.33 Installing the Serverless Operator, Knative CLI, Knative Serving, and Knative Eventing Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/installing_openshift_serverless/index |
Chapter 11. Preparing and uploading cloud images by using RHEL image builder | Chapter 11. Preparing and uploading cloud images by using RHEL image builder RHEL image builder can create custom system images ready for use on various cloud platforms. To use your customized RHEL system image in a cloud, create the system image with RHEL image builder by using the chosen output type, configure your system for uploading the image, and upload the image to your cloud account. You can push customized images to clouds through the Image Builder application in the RHEL web console, available for a subset of the service providers that we support, such as AWS and Microsoft Azure clouds. See Creating and automatically uploading images directly to AWS Cloud AMI and Creating and automatically uploading VHD images directly to Microsoft Azure cloud . 11.1. Preparing and uploading AMI images to AWS You can create custom images and can upload them, either manually or automatically, to the AWS cloud with RHEL image builder. 11.1.1. Preparing to manually upload AWS AMI images Before uploading an AWS AMI image, you must configure a system for uploading the images. Prerequisites You must have an Access Key ID configured in the AWS IAM account manager . You must have a writable S3 bucket prepared. See Creating S3 bucket . Procedure Install Python 3 and the pip tool: Install the AWS command-line tools with pip : Set your profile. The terminal prompts you to provide your credentials, region and output format: Define a name for your bucket and create a bucket: Replace bucketname with the actual bucket name. It must be a globally unique name. As a result, your bucket is created. To grant permission to access the S3 bucket, create a vmimport S3 Role in the AWS Identity and Access Management (IAM), if you have not already done so in the past: Create a trust-policy.json file with the trust policy configuration, in the JSON format. For example: Create a role-policy.json file with the role policy configuration, in the JSON format. For example: Create a role for your Amazon Web Services account, by using the trust-policy.json file: Embed an inline policy document, by using the role-policy.json file: Additional resources Using high-level (s3) commands with the AWS CLI 11.1.2. Manually uploading an AMI image to AWS by using the CLI You can use RHEL image builder to build ami images and manually upload them directly to the Amazon AWS Cloud service provider, by using the CLI. Prerequisites You have an Access Key ID configured in the AWS IAM account manager. You must have a writable S3 bucket prepared. See Creating S3 bucket . You have a defined blueprint. Procedure Using the text editor, create a configuration file with the following content: Replace values in the fields with your credentials for accessKeyID , secretAccessKey , bucket , and region . The IMAGE_KEY value is the name of your VM Image to be uploaded to EC2. Save the file as CONFIGURATION-FILE.toml and close the text editor. Start the compose to upload it to AWS: Replace: blueprint-name with the name of the blueprint you created. image-type with the ami image type. image-key with the name of your VM Image to be uploaded to EC2. configuration-file.toml with the name of the configuration file of the cloud provider. Note You must have the correct AWS Identity and Access Management (IAM) settings for the bucket you are going to send your customized image to. You have to set up a policy to your bucket before you are able to upload images to it.
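For example, with illustrative placeholder values for the blueprint name, image key, and configuration file, the start command might look like this: USD composer-cli compose start my-blueprint ami my-server-image aws-config.toml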
Check the status of the image build: After the image upload process is complete, you can see the "FINISHED" status. Verification To confirm that the image upload was successful: Access EC2 from the menu and select the correct region in the AWS console. The image must have the available status, to indicate that it was successfully uploaded. On the dashboard, select your image and click Launch . Additional Resources Required service role to import a VM 11.1.3. Creating and automatically uploading images to the AWS Cloud AMI You can create a (.raw) image by using RHEL image builder, and choose to check the Upload to AWS checkbox to automatically push the output image that you create directly to the Amazon AWS Cloud AMI service provider. Prerequisites You must have root or wheel group user access to the system. You have opened the RHEL image builder interface of the RHEL web console in a browser. You have created a blueprint. See Creating a blueprint in the web console interface . You must have an Access Key ID configured in the AWS IAM account manager. You must have a writable S3 bucket prepared. Procedure In the RHEL image builder dashboard, click the blueprint name that you previously created. Select the tab Images . Click Create Image to create your customized image. The Create Image window opens. From the Type drop-down menu list, select Amazon Machine Image Disk (.raw) . Check the Upload to AWS checkbox to upload your image to the AWS Cloud and click Next . To authenticate your access to AWS, type your AWS access key ID and AWS secret access key in the corresponding fields. Click Next . Note You can view your AWS secret access key only when you create a new Access Key ID. If you do not know your Secret Key, generate a new Access Key ID. Type the name of the image in the Image name field, type the Amazon bucket name in the Amazon S3 bucket name field, and select the AWS region for the bucket you are going to add your customized image to. Click Next . Review the information and click Finish . Optionally, click Back to modify any incorrect detail. Note You must have the correct IAM settings for the bucket to which you are going to send your customized image. This procedure uses the IAM Import and Export, so you have to set up a policy to your bucket before you are able to upload images to it. For more information, see Required Permissions for IAM Users . A pop-up on the upper right informs you of the saving progress. It also informs you that the image creation has been initiated, and shows the progress of the image creation and of the subsequent upload to the AWS Cloud. After the process is complete, you can see the Image build complete status. In a browser, access Service->EC2 . On the AWS console dashboard menu, choose the correct region . The image must have the Available status, to indicate that it is uploaded. On the AWS dashboard, select your image and click Launch . A new window opens. Choose an instance type according to the resources you need to start your image. Click Review and Launch . Review your instance start details. You can edit each section if you need to make any changes. Click Launch . Before you start the instance, select a public key to access it. You can either use the key pair you already have or you can create a new key pair. Follow the steps to create a new key pair in EC2 and attach it to the new instance. From the drop-down menu list, select Create a new key pair . Enter a name for the new key pair. It generates a new key pair. Click Download Key Pair to save the new key pair on your local system.
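As an alternative sketch, assuming the AWS CLI is configured as in the preparation section, you can also create a key pair from the command line; the key name is illustrative: USD aws ec2 create-key-pair --key-name my-builder-key --query 'KeyMaterial' --output text > my-builder-key.pem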
Then, you can click Launch Instance to start your instance. You can check the status of the instance, which displays as Initializing . After the instance status is running , the Connect button becomes available. Click Connect . A window appears with instructions on how to connect by using SSH. Select A standalone SSH client as the preferred connection method, and open a terminal. In the location where you store your private key, ensure that your key is not publicly viewable for SSH to work. To do so, run the command: Connect to your instance by using its Public DNS: Type yes to confirm that you want to continue connecting. As a result, you are connected to your instance over SSH. Verification Check if you are able to perform any action while connected to your instance by using SSH. Additional resources Open a case on Red Hat Customer Portal Connecting to your Linux instance by using SSH 11.2. Preparing and uploading VHD images to Microsoft Azure You can create custom images and can upload them, either manually or automatically, to the Microsoft Azure cloud with RHEL image builder. 11.2.1. Preparing to manually upload Microsoft Azure VHD images To create a VHD image that you can manually upload to Microsoft Azure cloud, you can use RHEL image builder. Prerequisites You must have a Microsoft Azure resource group and storage account. You have Python installed. The Azure CLI tool depends on Python. Procedure Import the Microsoft repository key: Create a local azure-cli.repo repository with the following information. Save the azure-cli.repo repository under /etc/yum.repos.d/ : Install the Microsoft Azure CLI: Note The downloaded version of the Microsoft Azure CLI package can vary depending on the current available version. Run the Microsoft Azure CLI: The terminal shows the following message Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code" . Then, the terminal opens a browser with a link to https://microsoft.com/devicelogin from where you can log in. Note If you are running a remote (SSH) session, the login page link will not open in the browser. In this case, you can copy the link to a browser and log in to authenticate your remote session. To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the device code to authenticate. List the keys for the storage account in Microsoft Azure: Replace resource-group-name with the name of your Microsoft Azure resource group and storage-account-name with the name of your Microsoft Azure storage account. Note You can list the available resources using the following command: Make note of the value key1 in the output of the command. Create a storage container: Replace storage-account-name with the name of the storage account. Additional resources Microsoft Azure CLI. 11.2.2. Manually uploading VHD images to Microsoft Azure cloud After you have created your customized VHD image, you can manually upload it to the Microsoft Azure cloud. Prerequisites Your system must be set up for uploading Microsoft Azure VHD images. See Preparing to upload Microsoft Azure VHD images . You must have a Microsoft Azure VHD image created by RHEL image builder. In the GUI, use the Azure Disk Image (.vhd) image type. In the CLI, use the vhd output type. Note When creating a .vhd image using the CLI, image builder writes temporary files to the /var subdirectory.
To prevent the .vhd image creation from failing, increase the /var subdirectory capacity to at least 15 to 20 GB free space to ensure availability. Procedure Push the image to Microsoft Azure and create an instance from it: After the upload to the Microsoft Azure Blob storage completes, create a Microsoft Azure image from it: Note Because the images that you create with RHEL image builder are hybrid images that support both the V1 = BIOS and V2 = UEFI instance types, you can specify the --hyper-v-generation argument. The default instance type is V1. Verification Create an instance either with the Microsoft Azure portal, or a command similar to the following: Use your private key via SSH to access the resulting instance. Log in as azure-user . This username was set in a previous step. Additional Resources Composing an image for the .vhd format fails (Red Hat Knowledgebase) 11.2.3. Creating and automatically uploading VHD images to Microsoft Azure cloud You can use RHEL image builder to create .vhd images that are automatically uploaded to a Blob Storage of the Microsoft Azure Cloud service provider. Prerequisites You have root access to the system. You have access to the RHEL image builder interface of the RHEL web console. You created a blueprint. See Creating a RHEL image builder blueprint in the web console interface . You have a Microsoft Storage Account created. You have a writable Blob Storage prepared. Procedure In the RHEL image builder dashboard, select the blueprint you want to use. Click the Images tab. Click Create Image to create your customized .vhd image. The Create image wizard opens. Select Microsoft Azure (.vhd) from the Type drop-down menu list. Check the Upload to Azure checkbox to upload your image to the Microsoft Azure Cloud. Enter the Image Size and click Next . On the Upload to Azure page, enter the following information: On the Authentication page, enter: Your Storage account name. You can find it on the Storage account page, in the Microsoft Azure portal . Your Storage access key : You can find it on the Access Key Storage page. Click Next . On the Destination page, enter: The image name. The Storage container . It is the blob container to which you will upload the image. Find it under the Blob service section, in the Microsoft Azure portal . Click Next . On the Review page, click Create . The RHEL image builder and upload processes start. Access the image you pushed into Microsoft Azure Cloud . Access the Microsoft Azure portal . In the search bar, type "storage account" and click Storage accounts from the list. On the search bar, type "Images" and select the first entry under Services . You are redirected to the Image dashboard . On the navigation panel, click Containers . Find the container you created. Inside the container is the .vhd file you created and pushed by using RHEL image builder. Verification Verify that you can create a VM image and launch it. In the search bar, type images account and click Images from the list. Click +Create . From the dropdown list, choose the resource group you used earlier. Enter a name for the image. For the OS type , select Linux . For the VM generation , select Gen 2 . Under Storage Blob , click Browse and click through the storage accounts and container until you reach your VHD file. Click Select at the end of the page. Choose an Account Type, for example, Standard SSD . Click Review + Create and then Create . Wait a few moments for the image creation. To launch the VM, follow the steps: Click Go to resource .
Click + Create VM from the menu bar on the header. Enter a name for your virtual machine. Complete the Size and Administrator account sections. Click Review + Create and then Create . You can see the deployment progress. After the deployment finishes, click the virtual machine name to retrieve the public IP address of the instance to connect by using SSH. Open a terminal to create an SSH connection to connect to the VM. Additional resources Microsoft Azure Storage Documentation Create a Microsoft Azure Storage account Open a case on Red Hat Customer Portal Help + support Contacting Red Hat 11.3. Preparing and uploading VMDK custom images to vSphere You can create custom images and can upload them, either manually or automatically, to the VMware vSphere cloud with RHEL image builder. 11.3.1. Creating and automatically uploading customized RHEL VMDK images by using image builder With RHEL image builder, you can create customized system images in the Open virtualization format ( .ova ), and automatically upload these images to the VMware vSphere client. The Open virtualization format ( .ova ) is a .vmdk image with additional metadata about the virtual hardware, which contains a minimal template to make it easier to import images into vSphere. The .ovf (Open Virtualization Format) package is part of the vSphere .ova image. After RHEL image builder finishes importing the .ova image to the vSphere client, you can configure it with any additional hardware, such as network, disks, and CD-ROMs. You can import the Open virtualization format ( .ova ) image by using either the vSphere GUI or the govc client. To upload the image by using the govc client, see Uploading VMDK images and creating a RHEL virtual machine in vSphere . Prerequisites You opened the RHEL image builder app from the web console in a browser. You created a blueprint. Procedure In the RHEL image builder dashboard, click the Blueprint tab. On the blueprint table, find the blueprint from which you want to build an image. On the right side of the chosen blueprint, click Create Image . The Create image dialog wizard opens. On the Image output page, complete the following steps: From the Select a blueprint list, select the blueprint you want. From the Image output type list, select the image output type you want. Optional: Check the Upload to VMware checkbox to upload the image directly to VMware. Enter a size for the image. Click Next . On the Upload to VMware page, enter the following information: Image name : Enter the image name. Host : Enter the VMware vSphere instance URL where the image file will be uploaded. Cluster : Enter the cluster name to which the image will be uploaded. Datacenter : The data center name to which the image will be uploaded. Datastore : The data store name to which the image will be uploaded. Folder : Folder name to which the image will be uploaded. Click Next . On the Review page, review the details about the image creation and click Create . The image creation starts, and you can follow the progress of the image creation and of the subsequent upload to the VMware vSphere client. 11.3.2. Uploading VMDK images and creating a RHEL virtual machine in vSphere With RHEL image builder, you can create customized VMware vSphere system images, either in the Open virtualization format ( .ova ) or in the Virtual disk ( .vmdk ) format. You can upload these images to the VMware vSphere client. You can upload the .vmdk or .ova image to VMware vSphere using the govc import.vmdk CLI tool.
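The govc tool reads its connection settings from GOVC environment variables, which the following procedure configures. A minimal sketch with illustrative values:
USD export GOVC_URL='vsphere.example.com'
USD export GOVC_DATACENTER='dc1'
USD export GOVC_DATASTORE='datastore1'
The remaining variables, such as GOVC_FOLDER, GOVC_RESOURCE_POOL, and GOVC_NETWORK, follow the same pattern.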
The vmdk you create has the cloud-init package installed, and you can use it to provision users by using user data, for example. Note Uploading vmdk images by using the VMware vSphere GUI is not supported. Prerequisites You created a blueprint with username and password customizations. You created a VMware vSphere image either in the .ova or .vmdk format by using RHEL image builder and downloaded it to your host system. You installed and configured the govc CLI tool, to be able to use the import.vmdk command. Procedure Configure the following values in the user environment with the GOVC environment variables: Navigate to the directory where you downloaded your VMware vSphere image. Launch the VMware vSphere image on vSphere by following the steps: Import the VMware vSphere image into vSphere: For the .ova format: Create the VM in vSphere without powering it on: For the .ova format, replace the line -firmware=efi -disk=" foldername /composer-api.vmdk" \ with -firmware=efi -disk=" foldername /composer-api.ova" \ Power on the VM: Retrieve the VM IP address: Use SSH to log in to the VM, using the username and password you specified in your blueprint: Note If you copied the .vmdk image from your local host to the destination using the govc datastore.upload command, using the resulting image is not supported. There is no option to use the import.vmdk command in the vSphere GUI and as a result, the vSphere GUI does not support the direct upload. As a consequence, the .vmdk image is not usable from the vSphere GUI. 11.3.3. Creating and automatically uploading VMDK images to vSphere using image builder GUI You can build VMware images by using the RHEL image builder GUI tool and automatically push the images directly to your vSphere instance. This avoids the need to download the image file and push it manually. The vmdk you create has the cloud-init package installed, and you can use it to provision users by using user data, for example. To build .vmdk images by using RHEL image builder and push them directly to your vSphere instance, follow the steps: Prerequisites You are a member of the root or the weldr group. You have opened RHEL image builder at https://localhost:9090/ in a browser. You have created a blueprint. See Creating a RHEL image builder blueprint in the web console interface . You have a vSphere Account . Procedure For the blueprint you created, click the Images tab . Click Create Image to create your customized image. The Image type window opens. In the Image type window: From the dropdown menu, select the Type: VMware vSphere (.vmdk). Check the Upload to VMware checkbox to upload your image to the vSphere. Optional: Set the size of the image you want to instantiate. The minimal default size is 2 GB. Click Next . In the Upload to VMware window, under Authentication , enter the following details: Username : username of the vSphere account. Password : password of the vSphere account. In the Upload to VMware window, under Destination , enter the following details about the image upload destination: Image name : a name for the image. Host : The URL of your VMware vSphere. Cluster : The name of the cluster. Data center : The name of the data center. Data store : The name of the Data store. Click Next . In the Review window, review the details of the image creation and click Finish . You can click Back to modify any incorrect detail.
RHEL image builder adds the compose of a RHEL vSphere image to the queue, and creates and uploads the image to the Cluster on the vSphere instance you specified. Note The image build and upload processes take a few minutes to complete. After the process is complete, you can see the Image build complete status. Verification After the image upload is completed successfully, you can create a Virtual Machine (VM) from the image you uploaded and log in to it. To do so: Access VMware vSphere Client. Search for the image in the Cluster on the vSphere instance you specified. Select the image you uploaded. Right-click the selected image. Click New Virtual Machine . A New Virtual Machine window opens. In the New Virtual Machine window, provide the following details: Select New Virtual Machine . Select a name and a folder for your VM. Select a compute resource: choose a destination compute resource for this operation. Select storage: For example, select NFS-Node1 . Select compatibility: The image should be BIOS only. Select a guest operating system: For example, select Linux and Red Hat Fedora (64-bit) . Customize hardware : When creating a VM, on the Device Configuration button on the upper right, delete the default New Hard Disk and use the drop-down to select an Existing Hard Disk disk image: Ready to complete: Review the details and click Finish to create the image. Navigate to the VMs tab. From the list, select the VM you created. Click the Start button from the panel. A new window appears, showing the VM image loading. Log in with the credentials you created for the blueprint. You can verify if the packages you added to the blueprint are installed. For example: Additional resources Introduction to vSphere Installation and Setup 11.4. Preparing and uploading custom GCE images to GCP You can create custom images and then automatically upload them to the Google Cloud Platform (GCP) environment with RHEL image builder. 11.4.1. Uploading images to GCP with RHEL image builder With RHEL image builder, you can build a gce image, provide credentials for your user or GCP service account, and then upload the gce image directly to the GCP environment. 11.4.1.1. Configuring and uploading a gce image to GCP by using the CLI Set up a configuration file with credentials to upload your gce image to GCP by using the RHEL image builder CLI. Warning You cannot manually import a gce image to GCP, because the image will not boot. You must use either gcloud or RHEL image builder to upload it. Prerequisites You have a valid Google account and credentials to upload your image to GCP. The credentials can be from a user account or a service account. The account associated with the credentials must have at least the following IAM roles assigned: roles/storage.admin - to create and delete storage objects roles/compute.storageAdmin - to import a VM image to Compute Engine. You have an existing GCP bucket. Procedure Use a text editor to create a gcp-config.toml configuration file with the following content: GCP_BUCKET points to an existing bucket. It is used to store the intermediate storage object of the image which is being uploaded. GCP_STORAGE_REGION can be a regular Google storage region or a dual or multi-region. OBJECT_KEY is the name of an intermediate storage object. It must not exist before the upload, and it is deleted when the upload process is done. If the object name does not end with .tar.gz , the extension is automatically added to the object name.
GCP_CREDENTIALS is a Base64 -encoded scheme of the credentials JSON file downloaded from GCP. The credentials determine which project GCP uploads the image to. Note Specifying GCP_CREDENTIALS in the gcp-config.toml file is optional if you use a different mechanism to authenticate with GCP. For other authentication methods, see Authenticating with GCP . Retrieve the GCP_CREDENTIALS from the JSON file downloaded from GCP. Create a compose with an additional image name and cloud provider profile: The image build, upload, and cloud registration processes can take up to ten minutes to complete. Verification Verify that the image status is FINISHED: Additional resources Identity and Access Management Create storage buckets 11.4.1.2. How RHEL image builder sorts the authentication order of different GCP credentials You can use several different types of credentials with RHEL image builder to authenticate with GCP. If the RHEL image builder configuration is set to authenticate with GCP using multiple sets of credentials, it uses the credentials in the following order of preference: Credentials specified with the composer-cli command in the configuration file. Credentials configured in the osbuild-composer worker configuration. Application Default Credentials from the Google GCP SDK library, which tries to automatically find a way to authenticate by using the following options: If the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, Application Default Credentials tries to load and use credentials from the file pointed to by the variable. Application Default Credentials tries to authenticate by using the service account attached to the resource that is running the code. For example, a Google Compute Engine VM. Note You must use the GCP credentials to determine which GCP project to upload the image to. Therefore, unless you want to upload all of your images to the same GCP project, you always must specify the credentials in the gcp-config.toml configuration file with the composer-cli command. 11.4.1.2.1. Specifying GCP credentials with the composer-cli command You can specify GCP authentication credentials in the upload target configuration gcp-config.toml file. Use a Base64 -encoded scheme of the Google account credentials JSON file to save time. Procedure Get the encoded content of the Google account credentials file with the path stored in the GOOGLE_APPLICATION_CREDENTIALS environment variable, by running the following command: In the upload target configuration gcp-config.toml file, set the credentials: 11.4.1.2.2. Specifying credentials in the osbuild-composer worker configuration You can configure GCP authentication credentials to be used for GCP globally for all image builds. This way, if you want to import images to the same GCP project, you can use the same credentials for all image uploads to GCP. Procedure In the /etc/osbuild-worker/osbuild-worker.toml worker configuration, set the following credential value: 11.5. Preparing and uploading custom images directly to OCI You can create custom images and then automatically upload them to the Oracle Cloud Infrastructure (OCI) instance with RHEL image builder. 11.5.1. Creating and automatically uploading custom images to OCI With RHEL image builder, build customized images and automatically push them directly to your Oracle Cloud Infrastructure (OCI) instance. Then, you can start an image instance from the OCI dashboard. Prerequisites You have root or weldr group user access to the system. You have an Oracle Cloud account.
You must be granted security access in an OCI policy by your administrator. You have created an OCI Bucket in the OCI_REGION of your choice. Procedure Open the RHEL image builder interface of the web console in a browser. Click Create blueprint . The Create blueprint wizard opens. On the Details page, enter a name for the blueprint, and optionally, a description. Click Next . On the Packages page, select the components and packages that you want to include in the image. Click Next . On the Customizations page, configure the customizations that you want for your blueprint. Click Next . On the Review page, click Create . To create an image, click Create Image . The Create image wizard opens. On the Image output page, complete the following steps: From the "Select a blueprint" drop-down menu, select the blueprint you want. From the "Image output type" drop-down menu, select Oracle Cloud Infrastructure (.qcow2) . Check the "Upload to OCI" checkbox to upload your image to the OCI. Enter the "image size" . Click Next . On the Upload to OCI - Authentication page, enter the following mandatory details: User OCID: you can find it in the Console on the page showing the user's details. Private key On the Upload to OCI - Destination page, enter the following mandatory details and click Next . Image name: a name for the image to be uploaded. OCI bucket Bucket namespace Bucket region Bucket compartment Bucket tenancy Review the details in the wizard and click Finish . RHEL image builder adds the compose of a RHEL .qcow2 image to the queue. Verification Access the OCI dashboard Custom Images. Select the Compartment you specified for the image and locate the image in the Import image table. Click the image name and verify the image information. Additional resources Managing custom images in the OCI. Managing buckets in the OCI. Generating SSH keys. 11.6. Preparing and uploading customized QCOW2 images directly to OpenStack You can create custom .qcow2 images with RHEL image builder, and manually upload them to OpenStack cloud deployments. 11.6.1. Uploading QCOW2 images to OpenStack With the RHEL image builder tool, you can create customized .qcow2 images that are suitable for uploading to OpenStack cloud deployments, and starting instances there. RHEL image builder creates images in the QCOW2 format, but with further changes specific to OpenStack. Warning Do not mistake the generic QCOW2 image type output format you create by using RHEL image builder for the OpenStack image type, which is also in the QCOW2 format, but contains further changes specific to OpenStack. Prerequisites You have created a blueprint. Procedure Start the compose of a QCOW2 image. Check the status of the build. After the image build finishes, you can download the image. Download the QCOW2 image: Access the OpenStack dashboard and click +Create Image . On the left menu, select the Admin tab. From the System Panel , click Image . The Create An Image wizard opens. In the Create An Image wizard: Enter a name for the image. Click Browse to upload the QCOW2 image. From the Format dropdown list, select the QCOW2 - QEMU Emulator . Click Create Image . On the left menu, select the Project tab. From the Compute menu, select Instances . Click the Launch Instance button. The Launch Instance wizard opens. On the Details page, enter a name for the instance. Click Next . On the Source page, select the name of the image you uploaded. Click Next . On the Flavor page, select the machine resources that best fit your needs. Click Launch .
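You can also confirm the uploaded image from the command line with the OpenStack client. This is a minimal sketch that assumes the python-openstackclient package and your cloud credentials are already configured; the image name is a placeholder: USD openstack image show <image_name>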
You can run the image instance using any mechanism (CLI or OpenStack web UI) from the image. Use your private key via SSH to access the resulting instance. Log in as cloud-user . 11.7. Preparing and uploading customized RHEL images to the Alibaba Cloud You can upload customized .ami images that you created by using RHEL image builder to the Alibaba Cloud. 11.7.1. Preparing to upload customized RHEL images to Alibaba Cloud To deploy a customized RHEL image to the Alibaba Cloud, first you need to verify the customized image. The image needs a specific configuration to boot successfully, because Alibaba Cloud requires custom images to meet certain requirements before you can use them. Note RHEL image builder generates images that conform to Alibaba's requirements. However, Red Hat recommends also using the Alibaba image_check tool to verify the format compliance of your image. Prerequisites You must have created an Alibaba image by using RHEL image builder. Procedure Connect to the system containing the image that you want to check by using the Alibaba image_check tool. Download the image_check tool: Change the file permission of the image compliance tool: Run the command to start the image compliance tool checkup: The tool verifies the system configuration and generates a report that is displayed on your screen. The image_check tool saves this report in the same folder where the image compliance tool is running. Troubleshooting If any of the Detection Items fail, follow the instructions in the terminal to correct it. Additional resources Image Compliance Tool. 11.7.2. Uploading customized RHEL images to Alibaba You can upload a customized AMI image you created by using RHEL image builder to the Object Storage Service (OSS). Prerequisites Your system is set up for uploading Alibaba images. See Preparing for uploading images to Alibaba . You have created an ami image by using RHEL image builder. You have a bucket. See Creating a bucket . You have an active Alibaba Account . You activated OSS . Procedure Log in to the OSS console . In the Bucket menu on the left, select the bucket to which you want to upload an image. In the upper right menu, click the Files tab. Click Upload . A dialog window opens on the right side. Configure the following: Upload To : Choose to upload the file to the Current directory or to a Specified directory. File ACL : Choose the type of permission of the uploaded file. Click Upload . Select the image you want to upload to the OSS Console. Click Open . Additional resources Upload an object. Creating an instance from custom images. Importing images. 11.7.3. Importing images to Alibaba Cloud To import a customized Alibaba RHEL image that you created by using RHEL image builder to the Elastic Compute Service (ECS), follow the steps: Prerequisites Your system is set up for uploading Alibaba images. See Preparing for uploading images to Alibaba . You have created an ami image by using RHEL image builder. You have a bucket. See Creating a bucket . You have an active Alibaba Account . You activated OSS . You have uploaded the image to Object Storage Service (OSS). See Uploading images to Alibaba . Procedure Log in to the ECS console. On the left-side menu, click Images . On the upper right side, click Import Image . A dialog window opens. Confirm that you have set up the correct region where the image is located. Enter the following information: OSS Object Address : See how to obtain OSS Object Address .
Image Name Operating System System Disk Size System Architecture Platform : Red Hat Optional: Provide the following details: Image Format : qcow2 or ami , depending on the uploaded image format. Image Description Add Images of Data Disks The address can be determined in the OSS management console. After selecting the required bucket in the left menu: Select the Files section. Click the Details link on the right for the appropriate image. A window appears on the right side of the screen, showing image details. The OSS object address is in the URL box. Click OK . Note The import process time can vary depending on the image size. The customized image is imported to the ECS Console. Additional resources Notes for importing images. Creating an instance from custom images. Upload an object. 11.7.4. Creating an instance of a customized RHEL image using Alibaba Cloud You can create instances of a customized RHEL image by using the Alibaba ECS Console. Prerequisites You have activated OSS and uploaded your custom image. You have successfully imported your image to the ECS Console. See Importing images to Alibaba . Procedure Log in to the ECS console. On the left-side menu, select Instances . In the upper-right corner, click Create Instance . You are redirected to a new window. Complete all the required information. See Creating an instance by using the wizard for more details. Click Create Instance and confirm the order. Note You can see the option Create Order instead of Create Instance , depending on your subscription. As a result, you have an active instance ready for deployment from the Alibaba ECS Console . Additional resources Creating an instance by using a custom image. Create an instance by using the wizard. | [
"dnf install python3 python3-pip",
"pip3 install awscli",
"aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:",
"BUCKET= bucketname aws s3 mb s3://USDBUCKET",
"{ \"Version\": \"2022-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\": { \"sts:Externalid\": \"vmimport\" } } }] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Action\": [\"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\"], \"Resource\": [\"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/ \"] }, { \"Effect\": \"Allow\", \"Action\": [\"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe \"], \"Resource\": \"*\" }] } USDBUCKET USDBUCKET",
"aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json",
"provider = \"aws\" [settings] accessKeyID = \" AWS_ACCESS_KEY_ID \" secretAccessKey = \"AWS_SECRET_ACCESS_KEY\" bucket = \"AWS_BUCKET\" region = \"AWS_REGION\" key = \"IMAGE_KEY\"",
"composer-cli compose start blueprint-name image-type image-key configuration-file .toml",
"composer-cli compose status",
"chmod 400 <_your-instance-name.pem_>",
"ssh -i <_your-instance-name.pem_> ec2-user@<_your-instance-IP-address_>",
"rpm --import https://packages.microsoft.com/keys/microsoft.asc",
"[azure-cli] name=Azure CLI baseurl=https://packages.microsoft.com/yumrepos/vscode enabled=1 gpgcheck=1 gpgkey=https://packages.microsoft.com/keys/microsoft.asc",
"dnfdownloader azure-cli rpm -ivh --nodeps azure-cli-2.0.64-1.el7.x86_64.rpm",
"az login",
"az storage account keys list --resource-group <resource_group_name> --account-name <storage_account_name>",
"az resource list",
"az storage container create --account-name <storage_account_name> --account-key <key1_value> --name <storage_account_name>",
"az storage blob upload --account-name <_account_name_> --container-name <_container_name_> --file <_image_-disk.vhd> --name <_image_-disk.vhd> --type page",
"az image create --resource-group <_resource_group_name_> --name <_image_>-disk.vhd --os-type linux --location <_location_> --source https://USD<_account_name_>.blob.core.windows.net/<_container_name_>/<_image_>-disk.vhd - Running",
"az vm create --resource-group <_resource_group_name_> --location <_location_> --name <_vm_name_> --image <_image_>-disk.vhd --admin-username azure-user --generate-ssh-keys - Running",
"GOVC_URL GOVC_DATACENTER GOVC_FOLDER GOVC_DATASTORE GOVC_RESOURCE_POOL GOVC_NETWORK",
"govc import.vmdk ./composer-api.vmdk foldername",
"govc import.ova ./composer-api.ova foldername",
"govc vm.create -net.adapter=vmxnet3 -m=4096 -c=2 -g=rhel8_64Guest -firmware=efi -disk=\" foldername /composer-api.vmdk\" -disk.controller=scsi -on=false vmname",
"govc vm.power -on vmname",
"govc vm.ip vmname",
"ssh admin@<_ip_address_of_the_vm_>",
"rpm -qa | grep firefox",
"provider = \"gcp\" [settings] bucket = \"GCP_BUCKET\" region = \"GCP_STORAGE_REGION\" object = \"OBJECT_KEY\" credentials = \"GCP_CREDENTIALS\"",
"sudo base64 -w 0 cee-gcp-nasa-476a1fa485b7.json",
"sudo composer-cli compose start BLUEPRINT-NAME gce IMAGE_KEY gcp-config.toml",
"sudo composer-cli compose status",
"base64 -w 0 \"USD{GOOGLE_APPLICATION_CREDENTIALS}\"",
"provider = \"gcp\" [settings] provider = \"gcp\" [settings] credentials = \"GCP_CREDENTIALS\"",
"[gcp] credentials = \" PATH_TO_GCP_ACCOUNT_CREDENTIALS \"",
"composer-cli compose start blueprint_name openstack",
"composer-cli compose status",
"composer-cli compose image UUID",
"curl -O https://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/attach/73848/cn_zh/1557459863884/image_check",
"chmod +x image_check",
"./image_check"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_a_customized_rhel_system_image/creating-cloud-images-with-composer_composing-a-customized-rhel-system-image |
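The console-based import above can also be scripted. The following sketch is an illustrative aside rather than part of the original procedure: it assumes the Alibaba Cloud ossutil and aliyun CLI tools are installed and configured, the bucket, object, region, and image names are hypothetical placeholders, and the aliyun parameter names (taken from the ImportImage API) should be verified against the current Alibaba Cloud CLI documentation.

# Upload the customized image to an existing OSS bucket (hypothetical bucket and object names).
ossutil cp ./composer-api.qcow2 oss://my-bucket/composer-api.qcow2

# Import the uploaded object as a custom ECS image; the DiskDeviceMapping fields
# point the import at the OSS object and declare its format.
aliyun ecs ImportImage \
  --RegionId cn-hangzhou \
  --ImageName rhel-custom-image \
  --DiskDeviceMapping.1.OSSBucket my-bucket \
  --DiskDeviceMapping.1.OSSObject composer-api.qcow2 \
  --DiskDeviceMapping.1.Format qcow2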
14.5.12. Displaying the Block Device Size | In this case a block device is the unique target name (<target dev='name'/>) or a source file (<source file='name'/>). To retrieve a list of block devices, run domblklist . The domblkinfo command requires a domain name and the name of the block device to report on. | [
"virsh domblkinfo rhel6"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-displaying_the_block_device_size |
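As a quick illustration of how the two commands fit together, the sketch below assumes a hypothetical guest named rhel6 with a vda disk; only the guest and device names are assumptions, the virsh subcommands themselves are standard.

# List the block devices attached to the guest to find the target name.
virsh domblklist rhel6

# Report capacity, allocation, and physical size for one of those devices.
virsh domblkinfo rhel6 vda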
3.7. Red Hat OpenStack Platform 16 for RHEL 8 for Power, little endian (RPMs) | 3.7. Red Hat OpenStack Platform 16 for RHEL 8 for Power, little endian (RPMs) The following table outlines the packages included in the openstack-16-for-rhel-8-ppc64le-rpms repository. Table 3.7. Red Hat OpenStack Platform 16 for RHEL 8 for Power, little endian (RPMs) Packages Name Version Advisory XStatic-Angular-common 1.5.8.0-6.el8ost RHEA-2020:0283 XStatic-Magic-Search-common 0.2.5.1-8.el8ost RHEA-2020:0283 ansible-config_template 1.0.1-0.20191122040234.ff61269.el8ost RHEA-2020:0283 ansible-pacemaker 1.0.4-0.20191022042340.0e4d7c0.el8ost RHEA-2020:0283 ansible-role-atos-hsm 0.1.1-0.20191024165047.866e075.el8ost RHEA-2020:0283 ansible-role-chrony 1.0.2-0.20191022052427.03e7fbe.el8ost RHEA-2020:0283 ansible-role-container-registry 1.1.1-0.20191025041237.bf2e310.el8ost RHEA-2020:0283 ansible-role-network-runner 0.1.7-3.el8ost RHEA-2020:0283 ansible-role-openstack-ml2 3.0.1-0.20191127205129.e24d01c.1.el8ost RHEA-2020:0283 ansible-role-openstack-operations 0.0.1-0.20191022044056.29cc537.el8ost RHEA-2020:0283 ansible-role-redhat-subscription 1.0.5-0.20191022053336.6c67a40.el8ost RHEA-2020:0283 ansible-role-thales-hsm 0.2.1-0.20191024165911.2803c6c.el8ost RHEA-2020:0283 ansible-role-tripleo-modify-image 1.1.1-0.20200122200932.58d7a5b.el8ost RHEA-2020:0283 ansible-tripleo-ipsec 9.2.0-0.20191022054642.ffe104c.el8ost RHEA-2020:0283 blosc 1.13.3-3.el8ost RHEA-2020:0283 bootswatch-common 3.3.7.0-7.el8ost RHEA-2020:0283 bootswatch-fonts 3.3.7.0-7.el8ost RHEA-2020:0283 collectd 5.8.1-8.el8ost RHEA-2020:0283 collectd-amqp1 5.8.1-8.el8ost RHEA-2020:0283 collectd-connectivity 5.8.1-8.el8ost RHEA-2020:0283 collectd-dbi 5.8.1-8.el8ost RHEA-2020:0283 collectd-hugepages 5.8.1-8.el8ost RHEA-2020:0283 collectd-ovs-events 5.8.1-8.el8ost RHEA-2020:0283 collectd-ovs-stats 5.8.1-8.el8ost RHEA-2020:0283 collectd-ping 5.8.1-8.el8ost RHEA-2020:0283 collectd-procevent 5.8.1-8.el8ost RHEA-2020:0283 collectd-python 5.8.1-8.el8ost RHEA-2020:0283 collectd-sensubility 0.1.4-1.el8ost RHEA-2020:0283 collectd-snmp-agent 5.8.1-8.el8ost RHEA-2020:0283 collectd-sysevent 5.8.1-8.el8ost RHEA-2020:0283 cpp-hocon 0.1.8-2.el8ost RHEA-2020:0283 crudini 0.9-6.el8ost RHEA-2020:0283 dib-utils 0.0.11-0.20191025164319.51661c3.el8ost RHEA-2020:0283 dibbler-client 1.0.1-9.el8ost RHEA-2020:0283 dibbler-relay 1.0.1-9.el8ost RHEA-2020:0283 dibbler-requestor 1.0.1-9.el8ost RHEA-2020:0283 dibbler-server 1.0.1-9.el8ost RHEA-2020:0283 diskimage-builder 2.29.2-0.20191114133446.76389d0.el8ost RHEA-2020:0283 dumb-init 1.1.3-18.el8ost RHEA-2020:0283 elixir 1.9.1-1.el8ost RHEA-2020:0283 erlang-asn1 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-compiler 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-crypto 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-eldap 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-erts 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-hipe 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-inets 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-kernel 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-mnesia 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-os_mon 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-otp_mibs 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-parsetools 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-public_key 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-runtime_tools 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-sasl 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-sd_notify 1.0-7.el8ost RHEA-2020:0283 erlang-snmp 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-ssl 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-stdlib 21.3.8.7-1.el8ost RHEA-2020:0283 
erlang-syntax_tools 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-tools 21.3.8.7-1.el8ost RHEA-2020:0283 erlang-xmerl 21.3.8.7-1.el8ost RHEA-2020:0283 etcd 3.2.21-2.el8ost RHEA-2020:0283 facter 3.9.3-9.el8ost RHEA-2020:0283 fontawesome-fonts 4.7.0-6.el8ost RHEA-2020:0283 fontawesome-fonts-web 4.7.0-6.el8ost RHEA-2020:0283 gnocchi-api 4.3.5-0.20191008121313.8cbc9ee.el8ost RHEA-2020:0283 gnocchi-common 4.3.5-0.20191008121313.8cbc9ee.el8ost RHEA-2020:0283 gnocchi-metricd 4.3.5-0.20191008121313.8cbc9ee.el8ost RHEA-2020:0283 gnocchi-statsd 4.3.5-0.20191008121313.8cbc9ee.el8ost RHEA-2020:0283 golang-github-BurntSushi-toml-devel 0-0.10.git2ceedfe.1.el8ost RHEA-2020:0283 golang-github-Sirupsen-logrus-devel 1.1.1-4.el8ost RHEA-2020:0283 golang-github-davecgh-go-spew-devel 0-0.11.git6d21280.1.el8ost RHEA-2020:0283 golang-github-golang-sys-devel 0-0.15.20181125git62eef0e.1.el8ost RHEA-2020:0283 golang-github-pmezard-go-difflib-devel 0-0.9.git792786c.1.el8ost RHEA-2020:0283 golang-github-stretchr-objx-devel 0-0.12.git1a9d0bb.1.el8ost RHEA-2020:0283 golang-github-stretchr-testify-devel 1.2.2-3.el8ost RHEA-2020:0283 golang-github-urfave-cli-devel 1.20.0-3.el8ost RHEA-2020:0283 golang-github-vbatts-tar-split 0.11.1-3.el8ost RHEA-2020:0283 golang-golangorg-crypto-devel 0-0.14.20181125git3d3f9f4.1.el8ost RHEA-2020:0283 golang-gopkg-check-devel 1-14.el8ost RHEA-2020:0283 golang-gopkg-yaml-devel 1-16.el8ost RHEA-2020:0283 golang-gopkg-yaml-devel-v2 1-16.el8ost RHEA-2020:0283 hdf5 1.8.20-7.el8ost RHEA-2020:0283 heat-cfntools 1.4.2-6.el8ost RHEA-2020:0283 hiera 3.3.1-5.el8ost RHEA-2020:0283 kuryr-binding-scripts 1.1.1-0.20190923160834.41e6964.el8ost RHEA-2020:0283 leatherman 1.4.5-4.el8ost RHEA-2020:0283 libdbi 0.9.0-10.el8ost RHEA-2020:0283 liberasurecode 1.5.0-5.el8ost RHEA-2020:0283 liboping 1.10.0-10.el8ost RHEA-2020:0283 libqhull 2015.2-7.el8ost RHEA-2020:0283 libsodium 1.0.16-4.5.el8ost RHEA-2020:0283 libwebsockets 2.4.2-2.el8 RHEA-2020:0283 libzstd 1.3.4-2.el8ost RHEA-2020:0283 mdi-common 1.4.57.0-9.el8ost RHEA-2020:0283 mdi-fonts 1.4.57.0-9.el8ost RHEA-2020:0283 ndisc6 1.0.3-9.el8ost RHEA-2020:0283 novnc 1.1.0-1.el8ost RHEA-2020:0283 octavia-amphora-image-x86_64 16.0-20200130.1.el8ost RHEA-2020:0428 openstack-aodh-api 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 openstack-aodh-common 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 openstack-aodh-compat 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 openstack-aodh-evaluator 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 openstack-aodh-expirer 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 openstack-aodh-listener 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 openstack-aodh-notifier 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 openstack-barbican 9.0.1-0.20191017050432.3211586.el8ost RHEA-2020:0283 openstack-barbican-api 9.0.1-0.20191017050432.3211586.el8ost RHEA-2020:0283 openstack-barbican-common 9.0.1-0.20191017050432.3211586.el8ost RHEA-2020:0283 openstack-barbican-keystone-listener 9.0.1-0.20191017050432.3211586.el8ost RHEA-2020:0283 openstack-barbican-worker 9.0.1-0.20191017050432.3211586.el8ost RHEA-2020:0283 openstack-ceilometer-central 13.0.1-0.20191129200352.306a818.el8ost RHEA-2020:0283 openstack-ceilometer-common 13.0.1-0.20191129200352.306a818.el8ost RHEA-2020:0283 openstack-ceilometer-compute 13.0.1-0.20191129200352.306a818.el8ost RHEA-2020:0283 openstack-ceilometer-ipmi 13.0.1-0.20191129200352.306a818.el8ost RHEA-2020:0283 openstack-ceilometer-notification 13.0.1-0.20191129200352.306a818.el8ost 
RHEA-2020:0283 openstack-ceilometer-polling 13.0.1-0.20191129200352.306a818.el8ost RHEA-2020:0283 openstack-cinder 15.0.2-0.20200123220928.900f769.el8ost RHEA-2020:0283 openstack-dashboard 16.0.1-0.20191122201406.897fa1b.el8ost RHEA-2020:0283 openstack-dashboard-theme 16.0.1-1.el8ost RHEA-2020:0283 openstack-ec2-api 9.0.1-0.20191018040001.fb16aa4.el8ost RHEA-2020:0283 openstack-glance 19.0.1-0.20191030153439.6e3ced8.el8ost RHEA-2020:0283 openstack-heat-agents 1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 openstack-heat-api 13.0.1-0.20191127204014.0703ca7.el8ost RHEA-2020:0283 openstack-heat-api-cfn 13.0.1-0.20191127204014.0703ca7.el8ost RHEA-2020:0283 openstack-heat-common 13.0.1-0.20191127204014.0703ca7.el8ost RHEA-2020:0283 openstack-heat-engine 13.0.1-0.20191127204014.0703ca7.el8ost RHEA-2020:0283 openstack-heat-monolith 13.0.1-0.20191127204014.0703ca7.el8ost RHEA-2020:0283 openstack-heat-ui 2.0.1-0.20191017045631.93076a6.el8ost RHEA-2020:0283 openstack-ironic-api 13.0.3-0.20191202201522.cfa58af.el8ost RHEA-2020:0283 openstack-ironic-common 13.0.3-0.20191202201522.cfa58af.el8ost RHEA-2020:0283 openstack-ironic-conductor 13.0.3-0.20191202201522.cfa58af.el8ost RHEA-2020:0283 openstack-ironic-inspector 9.2.1-0.20191029113430.7beeeaa.el8ost RHEA-2020:0283 openstack-ironic-inspector-dnsmasq 9.2.1-0.20191029113430.7beeeaa.el8ost RHEA-2020:0283 openstack-ironic-python-agent 5.0.1-0.20191125202513.9681fd7.el8ost RHEA-2020:0283 openstack-ironic-python-agent-builder 1.1.1-0.20191203040321.a34dfda.el8ost RHEA-2020:0283 openstack-ironic-staging-drivers 0.12.1-0.20191014171135.bcedf43.el8ost RHEA-2020:0283 openstack-ironic-ui 3.5.3-0.20191014155023.57c2b37.el8ost RHEA-2020:0283 openstack-keystone 16.0.1-0.20191210095025.bd3f637.el8ost RHEA-2020:0283 openstack-manila 9.1.0-0.20191112165911.9da5594.el8ost RHEA-2020:0283 openstack-manila-share 9.1.0-0.20191112165911.9da5594.el8ost RHEA-2020:0283 openstack-manila-ui 2.19.1-0.20191203200315.7f0c6ad.el8ost RHEA-2020:0283 openstack-mistral-all 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 openstack-mistral-api 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 openstack-mistral-common 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 openstack-mistral-engine 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 openstack-mistral-event-engine 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 openstack-mistral-executor 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 openstack-mistral-notifier 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 openstack-neutron 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-neutron-bgp-dragent 15.0.1-0.20191017043427.c2cf1cd.el8ost RHEA-2020:0283 openstack-neutron-bigswitch-agent 12.0.3-0.20191121040236.f4c1e96.el8ost RHEA-2020:0283 openstack-neutron-common 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-neutron-dynamic-routing-common 15.0.1-0.20191017043427.c2cf1cd.el8ost RHEA-2020:0283 openstack-neutron-l2gw-agent 15.0.1-0.20191014184513.0de2604.el8ost RHEA-2020:0283 openstack-neutron-linuxbridge 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-neutron-macvtap-agent 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-neutron-metering-agent 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-neutron-ml2 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-neutron-openvswitch 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-neutron-rpc-server 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 
openstack-neutron-sriov-nic-agent 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 openstack-nova 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-api 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-common 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-compute 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-conductor 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-console 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-migration 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-novncproxy 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-scheduler 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-serialproxy 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-nova-spicehtml5proxy 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 openstack-octavia-amphora-agent 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 openstack-octavia-api 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 openstack-octavia-common 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 openstack-octavia-diskimage-create 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 openstack-octavia-health-manager 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 openstack-octavia-housekeeping 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 openstack-octavia-ui 4.0.1-0.20191030103432.7eb401a.el8ost RHEA-2020:0283 openstack-octavia-worker 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 openstack-panko-api 7.0.1-0.20191017041322.9b551e7.el8ost RHEA-2020:0283 openstack-panko-common 7.0.1-0.20191017041322.9b551e7.el8ost RHEA-2020:0283 openstack-placement-api 2.0.0-0.20191017052028.36beddb.el8ost RHEA-2020:0283 openstack-placement-common 2.0.0-0.20191017052028.36beddb.el8ost RHEA-2020:0283 openstack-selinux 0.8.20-0.20191202205815.09846a2.el8ost RHEA-2020:0283 openstack-swift-account 2.23.2-0.20191105131958.e1e1f4d.el8ost RHEA-2020:0283 openstack-swift-container 2.23.2-0.20191105131958.e1e1f4d.el8ost RHEA-2020:0283 openstack-swift-object 2.23.2-0.20191105131958.e1e1f4d.el8ost RHEA-2020:0283 openstack-swift-proxy 2.23.2-0.20191105131958.e1e1f4d.el8ost RHEA-2020:0283 openstack-tempest 22.1.0-0.20191017061849.c933452.el8ost RHEA-2020:0283 openstack-tempest-all 22.1.0-0.20191017061849.c933452.el8ost RHEA-2020:0283 openstack-tripleo-common 11.3.3-0.20200121231250.3c68b48.el8ost RHEA-2020:0283 openstack-tripleo-common-container-base 11.3.3-0.20200121231250.3c68b48.el8ost RHEA-2020:0283 openstack-tripleo-common-containers 11.3.3-0.20200121231250.3c68b48.el8ost RHEA-2020:0283 openstack-tripleo-common-devtools 11.3.3-0.20200121231250.3c68b48.el8ost RHEA-2020:0283 openstack-tripleo-heat-templates 11.3.2-0.20200131125640.cc909b6.el8ost RHEA-2020:0283 openstack-tripleo-heat-templates-compat 10.6.2-0.20191202213319.41d9f8a.el8ost RHEA-2020:0283 openstack-tripleo-image-elements 10.6.1-0.20191022065313.7338463.el8ost RHEA-2020:0283 openstack-tripleo-puppet-elements 11.2.2-0.20200128210949.d668f88.el8ost RHEA-2020:0283 openstack-tripleo-validations 11.3.1-0.20191126041901.2bba53a.el8ost RHEA-2020:0283 openstack-zaqar 9.0.1-0.20191031121226.15a8ad7.el8ost RHEA-2020:0283 os-apply-config 10.5.1-0.20191022070316.3c6e3cd.el8ost RHEA-2020:0283 os-collect-config 10.5.1-0.20191022072455.0f985b9.el8ost RHEA-2020:0283 os-net-config 11.3.1-0.20191202211303.d814f68.el8ost RHEA-2020:0283 os-refresh-config 10.4.1-0.20191022071520.b5e8cc4.el8ost 
RHEA-2020:0283 paunch-services 5.3.1-0.20191214120154.b2f3c3c.el8ost RHEA-2020:0283 plotnetcfg 0.4.1-12.el8ost RHEA-2020:0283 puppet 5.5.10-5.el8ost RHEA-2020:0283 puppet-aodh 15.4.1-0.20191014131853.8beb87a.el8ost RHEA-2020:0283 puppet-apache 5.1.0-0.20191004100623.1fa9b1c.el8ost RHEA-2020:0283 puppet-archive 4.2.1-0.20191004094640.0538163.el8ost RHEA-2020:0283 puppet-auditd 2.2.1-0.20191002163110.189b22b.el8ost RHEA-2020:0283 puppet-barbican 15.4.1-0.20191014134242.00a75aa.el8ost RHEA-2020:0283 puppet-cassandra 2.7.4-0.20191004150835.9954256.el8ost RHEA-2020:0283 puppet-ceilometer 15.4.1-0.20191014134249.de7b6a1.el8ost RHEA-2020:0283 puppet-ceph 3.0.1-0.20191002213425.55a0f94.el8ost RHEA-2020:0283 puppet-certmonger 2.4.0-0.20191002174506.a198870.el8ost RHEA-2020:0283 puppet-cinder 15.4.1-0.20191028225029.f4aa29e.el8ost RHEA-2020:0283 puppet-collectd 11.0.1-0.20191004151223.785a71b.el8ost RHEA-2020:0283 puppet-concat 6.1.0-0.20191004093433.9baa8fc.el8ost RHEA-2020:0283 puppet-contrail 1.0.1-0.20191002163109.6f87929.el8ost RHEA-2020:0283 puppet-corosync 6.0.2-0.20191004103159.961add3.el8ost RHEA-2020:0283 puppet-datacat 0.6.3-0.20191002155944.5cce8f2.el8ost RHEA-2020:0283 puppet-designate 15.5.1-0.20191014152945.4fc5214.el8ost RHEA-2020:0283 puppet-dns 6.2.1-0.20191004115034.2ae1cd7.el8ost RHEA-2020:0283 puppet-ec2api 15.4.1-0.20191014133252.e38e26c.el8ost RHEA-2020:0283 puppet-elasticsearch 6.4.1-0.20191004142247.725afd6.el8ost RHEA-2020:0283 puppet-etcd 1.12.3-0.20191022053823.123d2af.el8ost RHEA-2020:0283 puppet-fdio 18.2-0.20191002164401.6fd1c8e.el8ost RHEA-2020:0283 puppet-firewall 2.1.1-0.20191004122614.4f4437a.el8ost RHEA-2020:0283 puppet-fluentd 0.10.0-0.20191002160933.4dfc15a.el8ost RHEA-2020:0283 puppet-git 0.5.0-0.20191002154952.4e4498e.el8ost RHEA-2020:0283 puppet-glance 15.4.1-0.20191127040410.69c10c4.el8ost RHEA-2020:0283 puppet-gnocchi 15.4.1-0.20191017040010.b462302.el8ost RHEA-2020:0283 puppet-haproxy 4.1.1-0.20191004123506.df96ffc.el8ost RHEA-2020:0283 puppet-headless 5.5.10-5.el8ost RHEA-2020:0283 puppet-heat 15.4.1-0.20191014145528.68b7e29.el8ost RHEA-2020:0283 puppet-horizon 15.4.1-0.20191014143128.620e8f6.el8ost RHEA-2020:0283 puppet-inifile 3.1.0-0.20191004095713.91efced.el8ost RHEA-2020:0283 puppet-ipaclient 2.5.2-0.20191002160932.b086731.el8ost RHEA-2020:0283 puppet-ironic 15.4.1-0.20191022165413.8fe6978.el8ost RHEA-2020:0283 puppet-java 5.0.1-0.20191004145038.e57cbc8.el8ost RHEA-2020:0283 puppet-kafka 5.3.1-0.20191004150127.88aa866.el8ost RHEA-2020:0283 puppet-keepalived 0.0.2-0.20191002154205.bbca37a.el8ost RHEA-2020:0283 puppet-keystone 15.4.1-0.20191019043505.e57542a.el8ost RHEA-2020:0283 puppet-kibana3 0.0.5-0.20191002144605.6ca9631.el8ost RHEA-2020:0283 puppet-kmod 2.3.2-0.20191004134845.41e2a2b.el8ost RHEA-2020:0283 puppet-manila 15.4.1-0.20191014132051.bd3540c.el8ost RHEA-2020:0283 puppet-memcached 3.4.0-0.20191002174507.66d3a66.el8ost RHEA-2020:0283 puppet-midonet 1.0.0-0.20191002180647.a8cec1d.el8ost RHEA-2020:0283 puppet-mistral 15.4.1-0.20191014143431.c733b8a.el8ost RHEA-2020:0283 puppet-module-data 0.5.1-0.20191002153420.28dafce.el8ost RHEA-2020:0283 puppet-mysql 6.2.1-0.20191016083905.6fabbcb.el8ost RHEA-2020:0283 puppet-n1k-vsm 0.0.2-0.20191002180648.92401b8.el8ost RHEA-2020:0283 puppet-neutron 15.4.1-0.20191105131130.1111e3d.el8ost RHEA-2020:0283 puppet-nova 15.4.1-0.20191126042922.b1bb388.el8ost RHEA-2020:0283 puppet-nssdb 1.0.2-0.20191002154951.2ed2a2d.el8ost RHEA-2020:0283 puppet-octavia 15.4.1-0.20191123040214.8d86546.el8ost RHEA-2020:0283 
puppet-opendaylight 8.4.3-0.20191002173417.bbe7ce5.el8ost RHEA-2020:0283 puppet-openstack_extras 15.4.1-0.20191014142330.8ba5522.el8ost RHEA-2020:0283 puppet-openstacklib 15.4.1-0.20191014170135.94b2016.el8ost RHEA-2020:0283 puppet-oslo 15.4.1-0.20191014135439.00f661d.el8ost RHEA-2020:0283 puppet-ovn 15.4.1-0.20191014133046.192ac4e.el8ost RHEA-2020:0283 puppet-pacemaker 0.8.1-0.20191218041139.80dcc19.el8ost RHEA-2020:0283 puppet-panko 15.4.1-0.20191014140134.49b7b3e.el8ost RHEA-2020:0283 puppet-placement 2.4.1-0.20191019044656.afddd56.el8ost RHEA-2020:0283 puppet-qdr 4.4.1-0.20191014131155.d141271.el8ost RHEA-2020:0283 puppet-rabbitmq 9.1.1-0.20191004100807.75574b8.el8ost RHEA-2020:0283 puppet-redis 4.2.2-0.20191004102502.be8d097.el8ost RHEA-2020:0283 puppet-remote 10.0.0-0.20191002155842.7420908.el8ost RHEA-2020:0283 puppet-rsync 1.1.2-0.20191002175500.a7d4f84.el8ost RHEA-2020:0283 puppet-rsyslog 3.3.1-0.20191017160011.0c2b6c8.el8ost RHEA-2020:0283 puppet-sahara 15.4.1-0.20191014141333.5d28c21.el8ost RHEA-2020:0283 puppet-sensu 2.62.4-0.20191002175556.20361a4.el8ost RHEA-2020:0283 puppet-server 5.5.10-5.el8ost RHEA-2020:0283 puppet-snmp 3.9.1-0.20191002162020.5d73485.el8ost RHEA-2020:0283 puppet-ssh 6.0.1-0.20191004101612.65570a3.el8ost RHEA-2020:0283 puppet-staging 1.0.5-0.20191002153419.b466d93.el8ost RHEA-2020:0283 puppet-stdlib 6.1.0-0.20191003115605.5aa891c.el8ost RHEA-2020:0283 puppet-swift 15.4.1-0.20191014152413.654e964.el8ost RHEA-2020:0283 puppet-sysctl 0.0.13-0.20191002173418.a3d160d.el8ost RHEA-2020:0283 puppet-systemd 2.6.1-0.20191004105758.f0ce6f0.el8ost RHEA-2020:0283 puppet-timezone 5.1.2-0.20191004104756.21b4a58.el8ost RHEA-2020:0283 puppet-tomcat 3.1.0-0.20191004111140.a3f92d1.el8ost RHEA-2020:0283 puppet-tripleo 11.4.1-0.20200118215809.6f9bf6c.el8ost RHEA-2020:0283 puppet-trove 15.4.1-0.20191014140632.4710735.el8ost RHEA-2020:0283 puppet-uchiwa 2.1.1-0.20191002164306.837c669.el8ost RHEA-2020:0283 puppet-vcsrepo 3.0.0-0.20191004152414.b06d5d3.el8ost RHEA-2020:0283 puppet-veritas_hyperscale 1.0.0-0.20191002162020.7c7868a.el8ost RHEA-2020:0283 puppet-vswitch 11.4.1-0.20191014172742.4fc423f.el8ost RHEA-2020:0283 puppet-xinetd 3.3.1-0.20191004094526.d768da2.el8ost RHEA-2020:0283 puppet-zaqar 15.4.1-0.20191014144423.88b97ec.el8ost RHEA-2020:0283 puppet-zookeeper 0.9.0-0.20191017160909.5877cbf.el8ost RHEA-2020:0283 python-django-bash-completion 2.0.10-1.el8ost RHEA-2020:0283 python-openstackclient-lang 4.0.0-0.20191025160014.aa64eb6.el8ost RHEA-2020:0283 python-oslo-cache-lang 1.37.0-0.20191004134644.f67262f.el8ost RHEA-2020:0283 python-oslo-concurrency-lang 3.30.0-0.20191004131147.610df38.el8ost RHEA-2020:0283 python-oslo-db-lang 5.0.2-0.20191004121119.fb40cdb.el8ost RHEA-2020:0283 python-oslo-i18n-lang 3.24.0-0.20191004120029.91b39bb.el8ost RHEA-2020:0283 python-oslo-log-lang 3.44.1-0.20191004135842.3ff497d.el8ost RHEA-2020:0283 python-oslo-middleware-lang 3.38.1-0.20191004114631.9bae80e.el8ost RHEA-2020:0283 python-oslo-policy-lang 2.3.2-0.20190919144636.e433a3c.el8ost RHEA-2020:0283 python-oslo-privsep-lang 1.33.3-0.20190920073426.ddde706.el8ost RHEA-2020:0283 python-oslo-utils-lang 3.41.3-0.20191118120255.cae9aa7.el8ost RHEA-2020:0283 python-oslo-versionedobjects-lang 1.36.1-0.20191025164659.14ee7e0.el8ost RHEA-2020:0283 python-oslo-vmware-lang 2.34.1-0.20190919143435.c592465.el8ost RHEA-2020:0283 python-pycadf-common 2.10.0-0.20191004114040.d113c15.el8ost RHEA-2020:0283 python3-Bottleneck 1.2.1-8.el8ost RHEA-2020:0283 python3-Cython 0.29.2-2.el8ost RHEA-2020:0283 
python3-GitPython 2.1.11-2.el8ost RHEA-2020:0283 python3-ImcSdk 0.9.6-2.el8ost RHEA-2020:0283 python3-SecretStorage 2.3.1-7.el8ost RHEA-2020:0283 python3-XStatic 1.0.1-9.el8ost RHEA-2020:0283 python3-XStatic-Angular 1.5.8.0-6.el8ost RHEA-2020:0283 python3-XStatic-Angular-Bootstrap 2.2.0.0-6.el8ost RHEA-2020:0283 python3-XStatic-Angular-FileUpload 12.0.4.0-10.el8ost RHEA-2020:0283 python3-XStatic-Angular-Gettext 2.3.8.0-2.el8ost RHEA-2020:0283 python3-XStatic-Angular-Schema-Form 0.8.13.0-1.el8ost RHEA-2020:0283 python3-XStatic-Angular-UUID 0.0.4.0-5.el8ost RHEA-2020:0283 python3-XStatic-Angular-Vis 4.16.0.0-2.el8ost RHEA-2020:0283 python3-XStatic-Angular-lrdragndrop 1.0.2.2-12.el8ost RHEA-2020:0283 python3-XStatic-Bootstrap-Datepicker 1.3.1.0-12.el8ost RHEA-2020:0283 python3-XStatic-Bootstrap-SCSS 3.3.7.1-6.el8ost RHEA-2020:0283 python3-XStatic-D3 3.5.17.0-6.el8ost RHEA-2020:0283 python3-XStatic-FileSaver 1.3.2.0-2.el8ost RHEA-2020:0283 python3-XStatic-Font-Awesome 4.7.0.0-6.el8ost RHEA-2020:0283 python3-XStatic-Hogan 2.0.0.2-13.el8ost RHEA-2020:0283 python3-XStatic-JQuery-Migrate 1.2.1.1-13.el8ost RHEA-2020:0283 python3-XStatic-JQuery-TableSorter 2.14.5.1-13.el8ost RHEA-2020:0283 python3-XStatic-JQuery-quicksearch 2.0.3.1-13.el8ost RHEA-2020:0283 python3-XStatic-JS-Yaml 3.8.1.0-3.el8ost RHEA-2020:0283 python3-XStatic-JSEncrypt 2.3.1.1-5.el8ost RHEA-2020:0283 python3-XStatic-Jasmine 2.4.1.1-5.el8ost RHEA-2020:0283 python3-XStatic-Json2yaml 0.1.1.0-2.el8ost RHEA-2020:0283 python3-XStatic-Magic-Search 0.2.5.1-8.el8ost RHEA-2020:0283 python3-XStatic-Rickshaw 1.5.0.0-15.el8ost RHEA-2020:0283 python3-XStatic-Spin 1.2.5.2-14.el8ost RHEA-2020:0283 python3-XStatic-bootswatch 3.3.7.0-7.el8ost RHEA-2020:0283 python3-XStatic-jQuery 1.12.4.1-2.el8ost RHEA-2020:0283 python3-XStatic-jquery-ui 1.12.0.1-5.el8ost RHEA-2020:0283 python3-XStatic-mdi 1.4.57.0-9.el8ost RHEA-2020:0283 python3-XStatic-objectpath 1.2.1.0-2.el8ost RHEA-2020:0283 python3-XStatic-roboto-fontface 0.5.0.0-9.el8ost RHEA-2020:0283 python3-XStatic-smart-table 1.4.13.2-5.el8ost RHEA-2020:0283 python3-XStatic-termjs 0.0.7.0-5.el8ost RHEA-2020:0283 python3-XStatic-tv4 1.2.7.0-1.el8ost RHEA-2020:0283 python3-adal 1.2.0-2.el8ost RHEA-2020:0283 python3-alembic 1.0.7-1.el8ost RHEA-2020:0283 python3-amqp 2.4.1-1.el8ost RHEA-2020:0283 python3-aniso8601 0.82-4.el8ost RHEA-2020:0283 python3-ansible-runner 1.2.0-2.el8ost RHEA-2020:0283 python3-anyjson 0.3.3-8.1.el8ost RHEA-2020:0283 python3-aodh 9.0.1-0.20191017053848.9e17daa.el8ost RHEA-2020:0283 python3-aodhclient 1.3.0-0.20190924061353.a8651ec.el8ost RHEA-2020:0283 python3-appdirs 1.4.0-7.el8ost RHEA-2020:0283 python3-autobahn 19.1.1-1.el8ost RHEA-2020:0283 python3-automaton 1.17.0-0.20190920064720.5e82feb.el8ost RHEA-2020:0283 python3-barbican 9.0.1-0.20191017050432.3211586.el8ost RHEA-2020:0283 python3-barbican-tests-tempest 0.3.0-0.20191001204409.a226433.el8ost RHEA-2020:0283 python3-barbicanclient 4.9.0-0.20191004111947.9c0e02d.el8ost RHEA-2020:0283 python3-bcrypt 3.1.6-2.el8ost RHEA-2020:0283 python3-beautifulsoup4 4.6.0-1.el8ost RHEA-2020:0283 python3-boto 2.45.0-7.el8ost RHEA-2020:0283 python3-boto3 1.9.101-1.el8ost RHEA-2020:0283 python3-botocore 1.12.119-1.el8ost RHEA-2020:0283 python3-cachetools 3.1.0-1.el8ost RHEA-2020:0283 python3-castellan 1.3.1-0.20191004110647.54af64d.el8ost RHEA-2020:0283 python3-ceilometer 13.0.1-0.20191129200352.306a818.el8ost RHEA-2020:0283 python3-ceilometermiddleware 1.5.0-0.20191004105946.fc21cde.el8ost RHEA-2020:0283 python3-certifi 2018.10.15-4.el8ost 
RHEA-2020:0283 python3-cinder 15.0.2-0.20200123220928.900f769.el8ost RHEA-2020:0283 python3-cinder-tests-tempest 0.3.1-0.20191226120013.6f0b4fc.el8ost RHEA-2020:0283 python3-cinderclient 5.0.0-0.20191025161022.a63d4d6.el8ost RHEA-2020:0283 python3-cinderlib 0.9.1-0.20191128120403.f7d4114.el8ost RHEA-2020:0283 python3-cinderlib-tests-functional 0.9.1-0.20191128120403.f7d4114.el8ost RHEA-2020:0283 python3-cliff 2.16.0-0.20191004122213.6b6b186.el8ost RHEA-2020:0283 python3-cmd2 0.6.8-10.el8ost RHEA-2020:0283 python3-collectd-gnocchi 1.7.2-0.20191010202650.de115a7.el8ost RHEA-2020:0283 python3-collectd-rabbitmq-monitoring 0.0.6-3.el8ost RHEA-2020:0283 python3-colorama 0.4.1-1.el8ost RHEA-2020:0283 python3-construct 2.8.10-2.el8ost RHEA-2020:0283 python3-contextlib2 0.5.5-8.el8ost RHEA-2020:0283 python3-cotyledon 1.7.3-4.el8ost RHEA-2020:0283 python3-cradox 2.1.0-3.el8ost RHEA-2020:0283 python3-croniter 0.3.27-1.el8ost RHEA-2020:0283 python3-crypto 2.6.1-18.el8ost RHEA-2020:0283 python3-cursive 0.2.2-0.20190920063425.d7cea1f.el8ost RHEA-2020:0283 python3-cycler 0.10.0-7.el8ost RHEA-2020:0283 python3-daemon 2.1.2-9.el8ost RHEA-2020:0283 python3-daiquiri 1.5.0-2.el8ost RHEA-2020:0283 python3-dateutil 2.8.0-1.el8ost RHEA-2020:0283 python3-ddt 1.0.1-4.el8ost RHEA-2020:0283 python3-debtcollector 1.22.0-0.20191004133750.0be4911.el8ost RHEA-2020:0283 python3-defusedxml 0.5.0-2.el8ost RHEA-2020:0283 python3-designate-tests-tempest 0.7.0-0.20190930205934.1096ab9.el8ost RHEA-2020:0283 python3-designateclient 3.0.0-0.20190920091151.093d8d7.el8ost RHEA-2020:0283 python3-dictdiffer 0.7.1-2.el8ost RHEA-2020:0283 python3-django 2.0.10-1.el8ost RHEA-2020:0283 python3-django-appconf 1.0.1-5.el8ost RHEA-2020:0283 python3-django-compressor 2.2-4.el8ost RHEA-2020:0283 python3-django-debreach 1.5.2-1.el8ost RHEA-2020:0283 python3-django-horizon 16.0.1-0.20191122201406.897fa1b.el8ost RHEA-2020:0283 python3-django-pyscss 2.0.2-11.el8ost RHEA-2020:0283 python3-dogpile-cache 0.6.8-1.el8ost RHEA-2020:0283 python3-dracclient 3.1.2-0.20191008085624.78407a0.el8ost RHEA-2020:0283 python3-ec2-api 9.0.1-0.20191018040001.fb16aa4.el8ost RHEA-2020:0283 python3-editor 0.4-5.el8ost RHEA-2020:0283 python3-etcd3gw 0.2.4-4.el8ost RHEA-2020:0283 python3-eventlet 0.25.1-1.1.el8ost RHEA-2020:0283 python3-extras 1.0.0-5.el8ost RHEA-2020:0283 python3-falcon 1.4.1-6.el8ost RHEA-2020:0283 python3-fasteners 0.14.1-15.el8ost RHEA-2020:0283 python3-fixtures 3.0.0-8.el8ost RHEA-2020:0283 python3-flake8 3.5.0-6.el8ost RHEA-2020:0283 python3-flask 1.0.2-2.el8ost RHEA-2020:0283 python3-flask-restful 0.3.6-8.el8ost RHEA-2020:0283 python3-funcsigs 1.0.2-3.el8ost RHEA-2020:0283 python3-future 0.16.0-6.el8ost RHEA-2020:0283 python3-futurist 1.9.0-0.20191004125956.25ffb8f.el8ost RHEA-2020:0283 python3-gabbi 1.42.1-3.el8ost RHEA-2020:0283 python3-gitdb 2.0.3-6.el8ost RHEA-2020:0283 python3-glance 19.0.1-0.20191030153439.6e3ced8.el8ost RHEA-2020:0283 python3-glance-store 1.0.1-0.20190930223428.9383fde.el8ost RHEA-2020:0283 python3-glanceclient 2.17.0-0.20190919135328.40c19aa.el8ost RHEA-2020:0283 python3-gnocchi 4.3.5-0.20191008121313.8cbc9ee.el8ost RHEA-2020:0283 python3-gnocchiclient 7.0.4-0.20190919175421.64814b9.el8ost RHEA-2020:0283 python3-google-auth 1.3.0-1.el8ost RHEA-2020:0283 python3-greenlet 0.4.14-5.el8ost RHEA-2020:0283 python3-gunicorn 19.9.0-5.el8ost RHEA-2020:0283 python3-hardware 0.22.1-0.20191202210240.bfd94c3.el8ost RHEA-2020:0283 python3-hardware-detect 0.22.1-0.20191202210240.bfd94c3.el8ost RHEA-2020:0283 python3-heat-agent 
1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 python3-heat-agent-ansible 1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 python3-heat-agent-apply-config 1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 python3-heat-agent-docker-cmd 1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 python3-heat-agent-hiera 1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 python3-heat-agent-json-file 1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 python3-heat-agent-puppet 1.10.1-0.20191022061131.96b819c.el8ost RHEA-2020:0283 python3-heat-tests-tempest 0.4.1-0.20191004095736.5d966f9.el8ost RHEA-2020:0283 python3-heatclient 1.18.0-0.20190923105620.eca1637.el8ost RHEA-2020:0283 python3-horizon-tests-tempest 0.2.0-0.20190930205041.730608b.el8ost RHEA-2020:0283 python3-ironic-inspector-client 3.7.0-0.20190923163033.d95a4cd.el8ost RHEA-2020:0283 python3-ironic-lib 2.21.0-0.20190923194637.2c7d1cf.el8ost RHEA-2020:0283 python3-ironic-neutron-agent 1.4.1-0.20191008133428.d0d3c45.el8ost RHEA-2020:0283 python3-ironic-prometheus-exporter 0.0.1-0.20190712090404.f7e9344.el8ost RHEA-2020:0283 python3-ironic-python-agent 5.0.1-0.20191125202513.9681fd7.el8ost RHEA-2020:0283 python3-ironic-tests-tempest 1.5.2-0.20191002113421.ed09bcc.el8ost RHEA-2020:0283 python3-ironicclient 3.1.0-0.20190926124316.04ef2d7.el8ost RHEA-2020:0283 python3-iso8601 0.1.11-9.el8ost RHEA-2020:0283 python3-json-logger 0.1.7-2.el8ost RHEA-2020:0283 python3-jsonpath-rw 1.2.3-3.el8ost RHEA-2020:0283 python3-jsonpath-rw-ext 1.0.0-2.el8ost RHEA-2020:0283 python3-kazoo 2.2.1-2.el8ost RHEA-2020:0283 python3-kerberos 1.2.5-8.el8ost RHEA-2020:0283 python3-keyring 17.1.1-1.el8ost RHEA-2020:0283 python3-keystone 16.0.1-0.20191210095025.bd3f637.el8ost RHEA-2020:0283 python3-keystone-tests-tempest 0.3.0-0.20190930205636.806103f.el8ost RHEA-2020:0283 python3-keystoneauth1 3.17.1-0.20191004143644.8b06c57.el8ost RHEA-2020:0283 python3-keystoneclient 3.21.0-0.20191004144136.79f150f.el8ost RHEA-2020:0283 python3-keystonemiddleware 7.0.1-0.20191004112333.0a65b14.el8ost RHEA-2020:0283 python3-kiwisolver 1.0.1-3.el8ost RHEA-2020:0283 python3-kombu 4.2.2-1.el8ost RHEA-2020:0283 python3-kubernetes 8.0.0-6.el8ost RHEA-2020:0283 python3-kuryr-tests-tempest 0.5.0-0.20191004165750.2194649.el8ost RHEA-2020:0283 python3-ldap3 2.4.1-5.el8ost RHEA-2020:0283 python3-ldappool 2.4.0-2.el8ost RHEA-2020:0283 python3-lesscpy 0.13.0-9.el8ost RHEA-2020:0283 python3-linecache2 1.0.0-3.el8ost RHEA-2020:0283 python3-lockfile 0.11.0-9.el8ost RHEA-2020:0283 python3-logutils 0.3.5-6.1.el8ost RHEA-2020:0283 python3-lz4 2.1.2-3.el8ost RHEA-2020:0283 python3-magnumclient 2.15.0-0.20190923154724.93acc92.el8ost RHEA-2020:0283 python3-manila 9.1.0-0.20191112165911.9da5594.el8ost RHEA-2020:0283 python3-manila-tests-tempest 0.4.0-0.20191004155736.171241e.el8ost RHEA-2020:0283 python3-manilaclient 1.29.0-0.20190923115452.1b2cafb.el8ost RHEA-2020:0283 python3-markupsafe 1.1.0-2.el8ost RHEA-2020:0283 python3-matplotlib 3.0.2-1.el8ost.4 RHEA-2020:0283 python3-matplotlib-data 3.0.2-1.el8ost.4 RHEA-2020:0283 python3-matplotlib-data-fonts 3.0.2-1.el8ost.4 RHEA-2020:0283 python3-mccabe 0.6.1-7.1.el8ost RHEA-2020:0283 python3-memcached 1.58-8.el8ost RHEA-2020:0283 python3-metalsmith 0.15.0-0.20190923165127.a77cfb2.el8ost RHEA-2020:0283 python3-microversion-parse 0.2.1-0.20190920083425.ae5e3ce.el8ost RHEA-2020:0283 python3-migrate 0.12.0-2.el8ost RHEA-2020:0283 python3-mimeparse 1.6.0-5.el8ost RHEA-2020:0283 python3-mistral 9.0.2-0.20191125120837.6651519.el8ost RHEA-2020:0283 
python3-mistral-lib 1.2.1-0.20191118120254.4bac2b2.el8ost RHEA-2020:0283 python3-mistral-tests-tempest 0.3.0-0.20190930023419.3c3a6cc.el8ost RHEA-2020:0283 python3-mistralclient 3.10.0-0.20190920090831.dc246bf.el8ost RHEA-2020:0283 python3-mock 3.0.5-1.el8ost RHEA-2020:0283 python3-monotonic 1.5-2.el8ost RHEA-2020:0283 python3-more-itertools 4.1.0-2.el8ost RHEA-2020:0283 python3-mox3 0.28.0-0.20191004131055.0a1e5b9.el8ost RHEA-2020:0283 python3-msgpack 0.6.1-3.el8ost RHEA-2020:0283 python3-munch 2.2.0-3.el8ost RHEA-2020:0283 python3-netifaces 0.10.9-2.el8ost RHEA-2020:0283 python3-network-runner 0.1.7-3.el8ost RHEA-2020:0283 python3-networking-ansible 3.0.1-0.20191127205129.e24d01c.1.el8ost RHEA-2020:0283 python3-networking-baremetal 1.4.1-0.20191008133428.d0d3c45.el8ost RHEA-2020:0283 python3-networking-bgpvpn 11.0.1-0.20191112132430.584b2f4.el8ost RHEA-2020:0283 python3-networking-bgpvpn-dashboard 11.0.1-0.20191112132430.584b2f4.el8ost RHEA-2020:0283 python3-networking-bgpvpn-heat 11.0.1-0.20191112132430.584b2f4.el8ost RHEA-2020:0283 python3-networking-bigswitch 12.0.3-0.20191121040236.f4c1e96.el8ost RHEA-2020:0283 python3-networking-fujitsu 8.0.1-0.20191010203151.1aafc66.el8ost RHEA-2020:0283 python3-networking-l2gw 15.0.1-0.20191014184513.0de2604.el8ost RHEA-2020:0283 python3-networking-l2gw-tests-tempest 0.1.1-0.20191008120920.a3af33b.el8ost RHEA-2020:0283 python3-networking-ovn 7.0.1-0.20191205040313.2ef5322.el8ost RHEA-2020:0283 python3-networking-ovn-metadata-agent 7.0.1-0.20191205040313.2ef5322.el8ost RHEA-2020:0283 python3-networking-ovn-migration-tool 7.0.1-0.20191205040313.2ef5322.el8ost RHEA-2020:0283 python3-networking-sfc 9.0.1-0.20191017043415.699eafd.el8ost RHEA-2020:0283 python3-networking-vmware-nsx 15.0.1-0.20191202204459.63b82d2.el8ost RHEA-2020:0283 python3-networkx 1.11-15.el8ost RHEA-2020:0283 python3-networkx-core 1.11-15.el8ost RHEA-2020:0283 python3-neutron 15.0.1-0.20191129040411.998d93b.el8ost RHEA-2020:0283 python3-neutron-dynamic-routing 15.0.1-0.20191017043427.c2cf1cd.el8ost RHEA-2020:0283 python3-neutron-lib 1.29.1-0.20190923154030.4ef4b71.el8ost RHEA-2020:0283 python3-neutron-lib-tests 1.29.1-0.20190923154030.4ef4b71.el8ost RHEA-2020:0283 python3-neutron-tests-tempest 0.6.0-0.20191004160137.3a4fc79.el8ost RHEA-2020:0283 python3-neutronclient 6.14.0-0.20190919181709.115f60f.el8ost RHEA-2020:0283 python3-nova 20.0.2-0.20191230035951.27bfd0b.el8ost RHEA-2020:0283 python3-novaclient 15.1.0-0.20190919143437.cd396b8.el8ost RHEA-2020:0283 python3-novajoin 1.3.0-0.20191217200124.265146e.el8ost RHEA-2020:0283 python3-novajoin-tests-tempest 0.0.1-0.20191114210523.cb81f53.el8ost RHEA-2020:0283 python3-numexpr 2.6.1-4.el8ost RHEA-2020:0283 python3-numpy 1.15.1-2.2.el8ost RHEA-2020:0283 python3-numpy-f2py 1.15.1-2.2.el8ost RHEA-2020:0283 python3-octavia 5.0.1-0.20191128120414.1920286.el8ost RHEA-2020:0283 python3-octavia-lib 1.4.0-0.20190920085629.cec8b19.el8ost RHEA-2020:0283 python3-octavia-tests-tempest 1.2.0-0.20191004154817.393d3c1.el8ost RHEA-2020:0283 python3-octavia-tests-tempest-golang 1.2.0-0.20191004154817.393d3c1.el8ost RHEA-2020:0283 python3-octaviaclient 1.10.0-0.20191126121848.0e00a90.el8ost RHEA-2020:0283 python3-openshift 0.8.1-1.el8ost RHEA-2020:0283 python3-openstackclient 4.0.0-0.20191025160014.aa64eb6.el8ost RHEA-2020:0283 python3-openstacksdk 0.36.0-0.20191004153514.8b85e8c.el8ost RHEA-2020:0283 python3-os-brick 2.10.0-0.20190923142035.cc22ac0.el8ost RHEA-2020:0283 python3-os-client-config 1.33.0-0.20191004115925.d0eea17.el8ost RHEA-2020:0283 
python3-os-ken 0.4.1-0.20190923143906.8f7851a.el8ost RHEA-2020:0283 python3-os-resource-classes 0.5.0-0.20190923162212.0dd643b.el8ost RHEA-2020:0283 python3-os-service-types 1.7.0-0.20191004125900.0b2f473.el8ost RHEA-2020:0283 python3-os-testr 1.1.0-0.20191004112840.414bbf6.el8ost RHEA-2020:0283 python3-os-traits 0.16.0-0.20190920084752.5a477b8.el8ost RHEA-2020:0283 python3-os-vif 1.17.0-0.20190923155827.3a08cc4.el8ost RHEA-2020:0283 python3-os-win 4.3.2-0.20190923114149.c5450ed.el8ost RHEA-2020:0283 python3-os-xenapi 0.3.4-0.20190919173434.12c68d0.el8ost RHEA-2020:0283 python3-osc-lib 1.14.1-0.20191004121222.a0d9746.el8ost RHEA-2020:0283 python3-osc-placement 1.7.0-0.20190923165353.8bbca01.el8ost RHEA-2020:0283 python3-oslo-cache 1.37.0-0.20191004134644.f67262f.el8ost RHEA-2020:0283 python3-oslo-concurrency 3.30.0-0.20191004131147.610df38.el8ost RHEA-2020:0283 python3-oslo-config 6.11.1-0.20191004140143.c4d8fa7.el8ost RHEA-2020:0283 python3-oslo-context 2.23.0-0.20191004132250.07f068d.el8ost RHEA-2020:0283 python3-oslo-db 5.0.2-0.20191004121119.fb40cdb.el8ost RHEA-2020:0283 python3-oslo-i18n 3.24.0-0.20191004120029.91b39bb.el8ost RHEA-2020:0283 python3-oslo-log 3.44.1-0.20191004135842.3ff497d.el8ost RHEA-2020:0283 python3-oslo-messaging 10.2.0-0.20191009113335.b7e9faf.el8ost RHEA-2020:0283 python3-oslo-middleware 3.38.1-0.20191004114631.9bae80e.el8ost RHEA-2020:0283 python3-oslo-policy 2.3.2-0.20190919144636.e433a3c.el8ost RHEA-2020:0283 python3-oslo-privsep 1.33.3-0.20190920073426.ddde706.el8ost RHEA-2020:0283 python3-oslo-reports 1.30.0-0.20190918125923.cf35fec.el8ost RHEA-2020:0283 python3-oslo-rootwrap 5.16.1-0.20190919173436.c6babc7.el8ost RHEA-2020:0283 python3-oslo-serialization 2.29.2-0.20191004132544.fa399b6.el8ost RHEA-2020:0283 python3-oslo-service 1.40.2-0.20191004142639.a7621c8.el8ost RHEA-2020:0283 python3-oslo-upgradecheck 0.3.2-0.20190919180723.e1df576.el8ost RHEA-2020:0283 python3-oslo-utils 3.41.3-0.20191118120255.cae9aa7.el8ost RHEA-2020:0283 python3-oslo-versionedobjects 1.36.1-0.20191025164659.14ee7e0.el8ost RHEA-2020:0283 python3-oslo-vmware 2.34.1-0.20190919143435.c592465.el8ost RHEA-2020:0283 python3-oslotest 3.8.1-0.20191004133547.7ad16de.el8ost RHEA-2020:0283 python3-osprofiler 2.8.2-0.20190920064824.d431c7a.el8ost RHEA-2020:0283 python3-ovirt-engine-sdk4 4.2.9-4.el8ost RHEA-2020:0283 python3-ovsdbapp 0.17.0-0.20190923110620.74b52bb.el8ost RHEA-2020:0283 python3-pandas 0.23.4-4.el8ost RHEA-2020:0283 python3-panko 7.0.1-0.20191017041322.9b551e7.el8ost RHEA-2020:0283 python3-pankoclient 0.5.0-0.20191010210258.572aee9.el8ost RHEA-2020:0283 python3-paramiko 2.4.2-2.el8ost RHEA-2020:0283 python3-passlib 1.7.0-5.el8ost RHEA-2020:0283 python3-paste 2.0.3-6.el8ost RHEA-2020:0283 python3-paste-deploy 1.5.2-13.el8ost RHEA-2020:0283 python3-patrole-tests-tempest 0.7.0-0.20191003113049.a5068ba.el8ost RHEA-2020:0283 python3-paunch 5.3.1-0.20191214120154.b2f3c3c.el8ost RHEA-2020:0283 python3-pbr 5.1.2-2.el8ost RHEA-2020:0283 python3-pecan 1.3.2-3.el8ost RHEA-2020:0283 python3-pexpect 4.6-2.el8ost RHEA-2020:0283 python3-pint 0.9-1.el8ost RHEA-2020:0283 python3-placement 2.0.0-0.20191017052028.36beddb.el8ost RHEA-2020:0283 python3-pluggy 0.8.1-1.el8ost RHEA-2020:0283 python3-posix_ipc 0.9.8-20.el8ost RHEA-2020:0283 python3-proliantutils 2.7.0-0.20190924060201.5927a93.el8ost RHEA-2020:0283 python3-prometheus_client 0.6.0-1.el8ost RHEA-2020:0283 python3-pycadf 2.10.0-0.20191004114040.d113c15.el8ost RHEA-2020:0283 python3-pycodestyle 2.4.0-3.1.el8ost RHEA-2020:0283 python3-pyeclib 
1.5.0-5.el8ost RHEA-2020:0283 python3-pyflakes 2.0.0-7.el8ost RHEA-2020:0283 python3-pyghmi 1.0.22-2.el8ost RHEA-2020:0283 python3-pynacl 1.3.0-1.el8ost RHEA-2020:0283 python3-pyngus 2.3.0-1.el8ost RHEA-2020:0283 python3-pyparsing 2.3.1-1.el8ost RHEA-2020:0283 python3-pyrabbit2 1.0.6-2.el8ost RHEA-2020:0283 python3-pyroute2 0.5.6-1.el8ost RHEA-2020:0283 python3-pysaml2 4.6.5-1.el8ost RHEA-2020:0283 python3-pysendfile 2.0.1-13.el8ost RHEA-2020:0283 python3-pysnmp 4.4.8-2.el8ost RHEA-2020:0283 python3-pystache 0.5.3-3.el8ost RHEA-2020:0283 python3-pytest 3.5.1-1.el8ost RHEA-2020:0283 python3-pytimeparse 1.1.5-2.1.el8ost RHEA-2020:0283 python3-pyxattr 0.5.3-19.el8ost RHEA-2020:0283 python3-qpid-proton 0.29.0-4.el8 RHEA-2020:0283 python3-rcssmin 1.0.6-4.el8ost RHEA-2020:0283 python3-redis 3.1.0-1.el8ost RHEA-2020:0283 python3-repoze-lru 0.4-9.el8ost RHEA-2020:0283 python3-requests-kerberos 0.8.0-4.el8ost RHEA-2020:0283 python3-requestsexceptions 1.4.0-0.20191004123913.d7ac0ff.el8ost RHEA-2020:0283 python3-retrying 1.2.3-5.el8ost RHEA-2020:0283 python3-rfc3986 1.2.0-2.el8ost RHEA-2020:0283 python3-rhosp-openvswitch 2.11-0.5.el8ost RHEA-2020:0283 python3-rjsmin 1.0.12-5.el8ost RHEA-2020:0283 python3-routes 2.4.1-2.el8ost RHEA-2020:0283 python3-rsa 3.4.2-9.el8ost RHEA-2020:0283 python3-rsd-lib 1.2.0-0.20191009110244.6090753.el8ost RHEA-2020:0283 python3-rsdclient 1.0.0-0.20191009110242.5e05430.el8ost RHEA-2020:0283 python3-ruamel-yaml 0.15.41-4.el8ost RHEA-2020:0283 python3-s3transfer 0.2.0-1.el8ost RHEA-2020:0283 python3-saharaclient 2.3.0-0.20191025165324.3107b45.el8ost RHEA-2020:0283 python3-scciclient 0.8.2-0.20191125120829.3e595d8.el8ost RHEA-2020:0283 python3-scrypt 0.8.0-4.el8ost RHEA-2020:0283 python3-scss 1.3.4-9.el8ost RHEA-2020:0283 python3-setproctitle 1.1.9-7.el8ost RHEA-2020:0283 python3-shade 1.32.0-0.20191010212050.47fe056.el8ost RHEA-2020:0283 python3-simplegeneric 0.8-8.el8ost RHEA-2020:0283 python3-simplejson 3.16.0-3.el8ost RHEA-2020:0283 python3-six 1.12.0-1.el8ost RHEA-2020:0283 python3-smmap 2.0.3-5.el8ost RHEA-2020:0283 python3-snappy 0.5-10.1.el8ost RHEA-2020:0283 python3-sqlalchemy-collectd 0.0.4-1.el8ost RHEA-2020:0283 python3-sqlalchemy-utils 0.32.12-8.el8ost RHEA-2020:0283 python3-sqlparse 0.2.2-6.1.el8ost RHEA-2020:0283 python3-statsd 3.2.1-6.el8ost RHEA-2020:0283 python3-stestr 2.3.1-5.el8ost RHEA-2020:0283 python3-stevedore 1.31.0-0.20191004113432.6817543.el8ost RHEA-2020:0283 python3-string_utils 0.6.0-4.el8ost RHEA-2020:0283 python3-subunit 1.3.0-7.el8ost RHEA-2020:0283 python3-sushy 2.0.0-0.20190923170230.eaa2cd7.el8ost RHEA-2020:0283 python3-sushy-oem-idrac 0.0.3-0.20191213062326.7b31b0b.el8ost RHEA-2020:0283 python3-swift 2.23.2-0.20191105131958.e1e1f4d.el8ost RHEA-2020:0283 python3-swiftclient 3.8.1-0.20190920094747.72b90fe.el8ost RHEA-2020:0283 python3-sysv_ipc 0.7.0-6.el8ost RHEA-2020:0283 python3-tables 3.3.0-6.el8ost RHEA-2020:0283 python3-tap-as-a-service 5.0.1-0.20191010204151.21aa47c.el8ost RHEA-2020:0283 python3-taskflow 3.7.1-0.20190920075933.f0eae2c.el8ost RHEA-2020:0283 python3-telemetry-tests-tempest 0.4.0-0.20191001213421.d60e6e2.el8ost RHEA-2020:0283 python3-tempest 22.1.0-0.20191017061849.c933452.el8ost RHEA-2020:0283 python3-tempest-tests 22.1.0-0.20191017061849.c933452.el8ost RHEA-2020:0283 python3-tempestconf 2.3.0-0.20191004163034.594614f.el8ost RHEA-2020:0283 python3-tempita 0.5.1-20.el8ost RHEA-2020:0283 python3-tenacity 5.0.2-3.el8ost RHEA-2020:0283 python3-testrepository 0.0.20-15.el8ost RHEA-2020:0283 python3-testscenarios 
0.5.0-9.el8ost RHEA-2020:0283 python3-testtools 2.3.0-8.el8ost RHEA-2020:0283 python3-tinyrpc 0.5-4.20170523git1f38ac.el8ost RHEA-2020:0283 python3-tooz 1.66.2-0.20190919123439.5b280a8.el8ost RHEA-2020:0283 python3-traceback2 1.4.0-3.el8ost RHEA-2020:0283 python3-tripleo-common 11.3.3-0.20200121231250.3c68b48.el8ost RHEA-2020:0283 python3-tripleo-common-tests-tempest 0.0.1-0.20191108205902.b692955.el8ost RHEA-2020:0283 python3-tripleoclient 12.3.2-0.20200130192329.78ac810.el8ost RHEA-2020:0283 python3-tripleoclient-heat-installer 12.3.2-0.20200130192329.78ac810.el8ost RHEA-2020:0283 python3-trollius 2.1-7.el8ost RHEA-2020:0283 python3-troveclient 3.0.0-0.20190923161219.960843d.el8ost RHEA-2020:0283 python3-twisted 16.4.1-12.el8ost RHEA-2020:0283 python3-txaio 18.8.1-1.el8ost RHEA-2020:0283 python3-ujson 2.0-0.1.20170206git2f1d487.el8ost.7 RHEA-2020:0283 python3-unittest2 1.1.0-17.el8ost RHEA-2020:0283 python3-versiontools 1.9.1-9.el8ost RHEA-2020:0283 python3-vine 1.2.0-2.el8ost RHEA-2020:0283 python3-vmware-nsxlib 15.0.2-0.20191202203556.d6e1862.el8ost RHEA-2020:0283 python3-voluptuous 0.10.5-5.el8ost RHEA-2020:0283 python3-waitress 1.4.2-1.el8ost RHEA-2020:0283 python3-warlock 1.3.0-9.el8ost RHEA-2020:0283 python3-webob 1.8.5-1.el8ost RHEA-2020:0283 python3-websocket-client 0.54.0-1.el8ost RHEA-2020:0283 python3-websockify 0.8.0-11.el8ost RHEA-2020:0283 python3-webtest 2.0.29-4.el8ost RHEA-2020:0283 python3-werkzeug 0.14.1-5.el8ost RHEA-2020:0283 python3-wrapt 1.11.1-2.el8ost RHEA-2020:0283 python3-wsaccel 0.6.2-10.el8ost RHEA-2020:0283 python3-wsgi_intercept 1.2.2-2.el8ost RHEA-2020:0283 python3-wsme 0.9.4-0.20190913104821.bff9624.el8ost RHEA-2020:0283 python3-yappi 1.0-2.el8ost RHEA-2020:0283 python3-yaql 1.1.3-3.el8ost RHEA-2020:0283 python3-zake 0.2.2-14.el8ost RHEA-2020:0283 python3-zaqar-tests-tempest 0.3.0-0.20191001040621.f6211b4.el8ost RHEA-2020:0283 python3-zaqarclient 1.12.0-0.20190920090052.9038bf6.el8ost RHEA-2020:0283 python3-zeroconf 0.19.1-5.el8ost RHEA-2020:0283 python3-zope-event 4.2.0-8.2.el8ost RHEA-2020:0283 python3-zope-interface 4.4.3-2.el8ost RHEA-2020:0283 qpid-dispatch-router 1.8.0-2.el8 RHEA-2020:0283 qpid-dispatch-tools 1.8.0-2.el8 RHEA-2020:0283 qpid-proton-c 0.29.0-4.el8 RHEA-2020:0283 rabbitmq-server 3.7.23-2.el8ost RHEA-2020:0283 rhosp-director-images 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-director-images-all 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-director-images-ipa 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-director-images-ipa-ppc64le 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-director-images-ipa-x86_64 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-director-images-minimal 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-director-images-ppc64le 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-director-images-x86_64 16.0-20200130.1.el8ost RHEA-2020:0428 rhosp-openvswitch 2.11-0.5.el8ost RHEA-2020:0283 rhosp-openvswitch-ovn-central 2.11-0.5.el8ost RHEA-2020:0283 rhosp-openvswitch-ovn-common 2.11-0.5.el8ost RHEA-2020:0283 rhosp-openvswitch-ovn-host 2.11-0.5.el8ost RHEA-2020:0283 rhosp-openvswitch-ovn-vtep 2.11-0.5.el8ost RHEA-2020:0283 rhosp-release 16.0.0-1.el8ost RHEA-2020:0283 roboto-fontface-common 0.5.0.0-9.el8ost RHEA-2020:0283 roboto-fontface-fonts 0.5.0.0-9.el8ost RHEA-2020:0283 ruby-augeas 0.5.0-3.el8ost RHEA-2020:0283 ruby-facter 3.9.3-9.el8ost RHEA-2020:0283 ruby-shadow 2.5.0-3.el8ost RHEA-2020:0283 rubygem-pathspec 0.2.1-4.el8ost RHEA-2020:0283 rubygem-rgen 0.6.6-2.1.el8ost RHEA-2020:0283 sysbench 0.4.12-14.el8ost RHEA-2020:0283 tripleo-ansible 
0.4.2-0.20200110023759.ee731ba.el8ost RHEA-2020:0283 web-assets-filesystem 5-7.el8ost RHEA-2020:0283 web-assets-httpd 5-7.el8ost RHEA-2020:0283 xstatic-angular-bootstrap-common 2.2.0.0-6.el8ost RHEA-2020:0283 xstatic-angular-fileupload-common 12.0.4.0-10.el8ost RHEA-2020:0283 xstatic-angular-gettext-common 2.3.8.0-2.el8ost RHEA-2020:0283 xstatic-angular-lrdragndrop-common 1.0.2.2-12.el8ost RHEA-2020:0283 xstatic-angular-schema-form-common 0.8.13.0-1.el8ost RHEA-2020:0283 xstatic-angular-uuid-common 0.0.4.0-5.el8ost RHEA-2020:0283 xstatic-angular-vis-common 4.16.0.0-2.el8ost RHEA-2020:0283 xstatic-bootstrap-datepicker-common 1.3.1.0-12.el8ost RHEA-2020:0283 xstatic-bootstrap-scss-common 3.3.7.1-6.el8ost RHEA-2020:0283 xstatic-d3-common 3.5.17.0-6.el8ost RHEA-2020:0283 xstatic-filesaver-common 1.3.2.0-2.el8ost RHEA-2020:0283 xstatic-hogan-common 2.0.0.2-13.el8ost RHEA-2020:0283 xstatic-jasmine-common 2.4.1.1-5.el8ost RHEA-2020:0283 xstatic-jquery-migrate-common 1.2.1.1-13.el8ost RHEA-2020:0283 xstatic-jquery-quicksearch-common 2.0.3.1-13.el8ost RHEA-2020:0283 xstatic-jquery-tablesorter-common 2.14.5.1-13.el8ost RHEA-2020:0283 xstatic-jquery-ui-common 1.12.0.1-5.el8ost RHEA-2020:0283 xstatic-js-yaml-common 3.8.1.0-3.el8ost RHEA-2020:0283 xstatic-jsencrypt-common 2.3.1.1-5.el8ost RHEA-2020:0283 xstatic-json2yaml-common 0.1.1.0-2.el8ost RHEA-2020:0283 xstatic-objectpath-common 1.2.1.0-2.el8ost RHEA-2020:0283 xstatic-rickshaw-common 1.5.0.0-15.el8ost RHEA-2020:0283 xstatic-smart-table-common 1.4.13.2-5.el8ost RHEA-2020:0283 xstatic-spin-common 1.2.5.2-14.el8ost RHEA-2020:0283 xstatic-termjs-common 0.0.7.0-5.el8ost RHEA-2020:0283 xstatic-tv4-common 1.2.7.0-1.el8ost RHEA-2020:0283 yaml-cpp 0.6.1-8.el8ost RHEA-2020:0283 | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/package_manifest/openstack-16-for-rhel-8-ppc64le-rpms_2019-11-12 |
Chapter 16. Managing TLS certificates | Chapter 16. Managing TLS certificates Streams for Apache Kafka supports TLS for encrypted communication between Kafka and Streams for Apache Kafka components. Streams for Apache Kafka establishes encrypted TLS connections for communication between the following components when using Kafka in KRaft mode: Kafka brokers Kafka controllers Kafka brokers and controllers Streams for Apache Kafka operators and Kafka Cruise Control and Kafka brokers Kafka Exporter and Kafka brokers Connections between clients and Kafka brokers use listeners that you must configure to use TLS-encrypted communication. You configure these listeners in the Kafka custom resource and each listener name and port number must be unique within the cluster. Communication between Kafka brokers and Kafka clients is encrypted according to how the tls property is configured for the listener. For more information, see Chapter 14, Setting up client access to a Kafka cluster . The following diagram shows the connections for secure communication. Figure 16.1. KRaft-based Kafka communication secured by TLS encryption The ports shown in the diagram are used as follows: Control plane listener (9090) The internal control plane listener on port 9090 facilitates interbroker communication between Kafka controllers and broker-to-controller communication. Additionally, the Cluster Operator communicates with the controllers through the listener. This listener is not accessible to Kafka clients. Replication listener (9091) Data replication between brokers, as well as internal connections to the brokers from Streams for Apache Kafka operators, Cruise Control, and the Kafka Exporter, use the replication listener on port 9091. This listener is not accessible to Kafka clients. Listeners for client connections (9092 or higher) For TLS-encrypted communication (through configuration of the listener), internal and external clients connect to Kafka brokers. External clients (producers and consumers) connect to the Kafka brokers through the advertised listener port. Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). If you are using ZooKeeper for cluster management, there are TLS connections between ZooKeeper and Kafka brokers and Streams for Apache Kafka operators. The following diagram shows the connections for secure communication when using ZooKeeper. Figure 16.2. Kafka and ZooKeeper communication secured by TLS encryption The ZooKeeper ports are used as follows: ZooKeeper Port (2181) ZooKeeper port for connection to Kafka brokers. Additionally, the Cluster Operator communicates with ZooKeeper through this port. If you are using the Topic Operator in bidirectional mode, it also communicates with ZooKeeper through this port. ZooKeeper internodal communication port (2888) ZooKeeper port for internodal communication between ZooKeeper nodes. ZooKeeper leader election port (3888) ZooKeeper port for leader election among ZooKeeper nodes in a ZooKeeper cluster. 16.1. Internal cluster CA and clients CA To support encryption, each Streams for Apache Kafka component needs its own private keys and public key certificates. All component certificates are signed by an internal CA (certificate authority) called the cluster CA . 
CA (Certificate Authority) certificates are generated by the Cluster Operator to verify the identities of components and clients. Similarly, each Kafka client application connecting to Streams for Apache Kafka using mTLS needs to use private keys and certificates. A second internal CA, named the clients CA , is used to sign certificates for the Kafka clients. Both the cluster CA and clients CA have a self-signed public key certificate. Kafka brokers are configured to trust certificates signed by either the cluster CA or clients CA. Components that clients do not need to connect to, such as ZooKeeper, only trust certificates signed by the cluster CA. Unless TLS encryption for external listeners is disabled, client applications must trust certificates signed by the cluster CA. This is also true for client applications that perform mTLS authentication. By default, Streams for Apache Kafka automatically generates and renews CA certificates issued by the cluster CA or clients CA. You can configure the management of these CA certificates using Kafka.spec.clusterCa and Kafka.spec.clientsCa properties. Note If you don't want to use the CAs generated by the Cluster Operator, you can install your own cluster and clients CA certificates . Any certificates you provide are not renewed by the Cluster Operator. 16.2. Secrets generated by the operators The Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within a cluster. It also sets up other TLS certificates if you want to enable encryption or mTLS authentication between Kafka brokers and clients. Secrets are created when custom resources are deployed, such as Kafka and KafkaUser . Streams for Apache Kafka uses these secrets to store private and public key certificates for Kafka clusters, clients, and users. The secrets are used for establishing TLS encrypted connections between Kafka brokers, and between brokers and clients. They are also used for mTLS authentication. Cluster and clients secrets are always pairs: one contains the public key and one contains the private key. Cluster secret A cluster secret contains the cluster CA to sign Kafka broker certificates. Connecting clients use the certificate to establish a TLS encrypted connection with a Kafka cluster. The certificate verifies broker identity. Client secret A client secret contains the clients CA for a user to sign its own client certificate. This allows mutual authentication against the Kafka cluster. The broker validates a client's identity through the certificate. User secret A user secret contains a private key and certificate. The secret is created and signed by the clients CA when a new user is created. The key and certificate are used to authenticate and authorize the user when accessing the cluster. Note You can provide Kafka listener certificates for TLS listeners or external listeners that have TLS encryption enabled. Use Kafka listener certificates to incorporate the security infrastructure you already have in place. 16.2.1. TLS authentication using keys and certificates in PEM or PKCS #12 format The secrets created by Streams for Apache Kafka provide private keys and certificates in PEM (Privacy Enhanced Mail) and PKCS #12 (Public-Key Cryptography Standards) formats. PEM and PKCS #12 are OpenSSL-generated key formats for TLS communications using the SSL protocol. You can configure mutual TLS (mTLS) authentication that uses the credentials contained in the secrets generated for a Kafka cluster and user. 
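Because the Cluster Operator generates and renews these CA certificates itself, it can be useful to inspect the current cluster CA before planning client trust. The sketch below is an informal aside, not part of the chapter's procedures: the cluster name my-cluster is a hypothetical placeholder, and it assumes the oc and openssl tools are available.

# Extract the public cluster CA certificate from its secret and show its subject and validity window.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -dates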
To set up mTLS, you must first do the following: Configure your Kafka cluster with a listener that uses mTLS Create a KafkaUser that provides client credentials for mTLS When you deploy a Kafka cluster, a <cluster_name>-cluster-ca-cert secret is created with a public key to verify the cluster. You use the public key to configure a truststore for the client. When you create a KafkaUser , a <kafka_user_name> secret is created with the keys and certificates to verify the user (client). Use these credentials to configure a keystore for the client. With the Kafka cluster and client set up to use mTLS, you extract credentials from the secrets and add them to your client configuration. PEM keys and certificates For PEM, you add the following to your client configuration: Truststore ca.crt from the <cluster_name>-cluster-ca-cert secret, which is the CA certificate for the cluster. Keystore user.crt from the <kafka_user_name> secret, which is the public certificate of the user. user.key from the <kafka_user_name> secret, which is the private key of the user. PKCS #12 keys and certificates For PKCS #12, you add the following to your client configuration: Truststore ca.p12 from the <cluster_name>-cluster-ca-cert secret, which is the CA certificate for the cluster. ca.password from the <cluster_name>-cluster-ca-cert secret, which is the password to access the public cluster CA certificate. Keystore user.p12 from the <kafka_user_name> secret, which is the public key certificate of the user. user.password from the <kafka_user_name> secret, which is the password to access the public key certificate of the Kafka user. PKCS #12 is supported by Java, so you can add the values of the certificates directly to your Java client configuration. You can also reference the certificates from a secure storage location. With PEM files, you must add the certificates directly to the client configuration in single-line format. Choose a format that's suitable for establishing TLS connections between your Kafka cluster and client. Use PKCS #12 if you are unfamiliar with PEM. Note All keys are 2048 bits in size and, by default, are valid for 365 days from the initial generation. You can change the validity period . 16.2.2. Secrets generated by the Cluster Operator The Cluster Operator generates the following certificates, which are saved as secrets in the OpenShift cluster. Streams for Apache Kafka uses these secrets by default. The cluster CA and clients CA have separate secrets for the private key and public key. <cluster_name> -cluster-ca Contains the private key of the cluster CA. Streams for Apache Kafka and Kafka components use the private key to sign server certificates. <cluster_name> -cluster-ca-cert Contains the public key of the cluster CA. Kafka clients use the public key to verify the identity of the Kafka brokers they are connecting to with TLS server authentication. <cluster_name> -clients-ca Contains the private key of the clients CA. Kafka clients use the private key to sign new user certificates for mTLS authentication when connecting to Kafka brokers. <cluster_name> -clients-ca-cert Contains the public key of the clients CA. Kafka brokers use the public key to verify the identity of clients accessing the Kafka brokers when mTLS authentication is used. Secrets for communication between Streams for Apache Kafka components contain a private key and a public key certificate signed by the cluster CA. <cluster_name> -kafka-brokers Contains the private and public keys for Kafka brokers. 
<cluster_name> -zookeeper-nodes Contains the private and public keys for ZooKeeper nodes. <cluster_name> -cluster-operator-certs Contains the private and public keys for encrypting communication between the Cluster Operator and Kafka or ZooKeeper. <cluster_name> -entity-topic-operator-certs Contains the private and public keys for encrypting communication between the Topic Operator and Kafka or ZooKeeper. <cluster_name> -entity-user-operator-certs Contains the private and public keys for encrypting communication between the User Operator and Kafka or ZooKeeper. <cluster_name> -cruise-control-certs Contains the private and public keys for encrypting communication between Cruise Control and Kafka or ZooKeeper. <cluster_name> -kafka-exporter-certs Contains the private and public keys for encrypting communication between Kafka Exporter and Kafka or ZooKeeper. Note You can provide your own server certificates and private keys to connect to Kafka brokers using Kafka listener certificates rather than certificates signed by the cluster CA. 16.2.3. Cluster CA secrets Cluster CA secrets are managed by the Cluster Operator in a Kafka cluster. Only the <cluster_name> -cluster-ca-cert secret is required by clients. All other cluster secrets are accessed by Streams for Apache Kafka components. You can enforce this using OpenShift role-based access controls, if necessary. Note The CA certificates in <cluster_name> -cluster-ca-cert must be trusted by Kafka client applications so that they validate the Kafka broker certificates when connecting to Kafka brokers over TLS. Table 16.1. Fields in the <cluster_name>-cluster-ca secret Field Description ca.key The current private key for the cluster CA. Table 16.2. Fields in the <cluster_name>-cluster-ca-cert secret Field Description ca.p12 PKCS #12 store for storing certificates and keys. ca.password Password for protecting the PKCS #12 store. ca.crt The current certificate for the cluster CA. Table 16.3. Fields in the <cluster_name>-kafka-brokers secret Field Description <cluster_name> -kafka- <num> .p12 PKCS #12 store for storing certificates and keys. <cluster_name> -kafka- <num> .password Password for protecting the PKCS #12 store. <cluster_name> -kafka- <num> .crt Certificate for a Kafka broker pod <num> . Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . <cluster_name> -kafka- <num> .key Private key for a Kafka broker pod <num> . Table 16.4. Fields in the <cluster_name>-zookeeper-nodes secret Field Description <cluster_name> -zookeeper- <num> .p12 PKCS #12 store for storing certificates and keys. <cluster_name> -zookeeper- <num> .password Password for protecting the PKCS #12 store. <cluster_name> -zookeeper- <num> .crt Certificate for ZooKeeper node <num> . Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . <cluster_name> -zookeeper- <num> .key Private key for ZooKeeper pod <num> . Table 16.5. Fields in the <cluster_name>-cluster-operator-certs secret Field Description cluster-operator.p12 PKCS #12 store for storing certificates and keys. cluster-operator.password Password for protecting the PKCS #12 store. cluster-operator.crt Certificate for mTLS communication between the Cluster Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . cluster-operator.key Private key for mTLS communication between the Cluster Operator and Kafka or ZooKeeper. Table 16.6. 
Fields in the <cluster_name>-entity-topic-operator-certs secret Field Description entity-operator.p12 PKCS #12 store for storing certificates and keys. entity-operator.password Password for protecting the PKCS #12 store. entity-operator.crt Certificate for mTLS communication between the Topic Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . entity-operator.key Private key for mTLS communication between the Topic Operator and Kafka or ZooKeeper. Table 16.7. Fields in the <cluster_name>-entity-user-operator-certs secret Field Description entity-operator.p12 PKCS #12 store for storing certificates and keys. entity-operator.password Password for protecting the PKCS #12 store. entity-operator.crt Certificate for mTLS communication between the User Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . entity-operator.key Private key for mTLS communication between the User Operator and Kafka or ZooKeeper. Table 16.8. Fields in the <cluster_name>-cruise-control-certs secret Field Description cruise-control.p12 PKCS #12 store for storing certificates and keys. cruise-control.password Password for protecting the PKCS #12 store. cruise-control.crt Certificate for mTLS communication between Cruise Control and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . cruise-control.key Private key for mTLS communication between the Cruise Control and Kafka or ZooKeeper. Table 16.9. Fields in the <cluster_name>-kafka-exporter-certs secret Field Description kafka-exporter.p12 PKCS #12 store for storing certificates and keys. kafka-exporter.password Password for protecting the PKCS #12 store. kafka-exporter.crt Certificate for mTLS communication between Kafka Exporter and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . kafka-exporter.key Private key for mTLS communication between the Kafka Exporter and Kafka or ZooKeeper. 16.2.4. Clients CA secrets Clients CA secrets are managed by the Cluster Operator in a Kafka cluster. The certificates in <cluster_name> -clients-ca-cert are those which the Kafka brokers trust. The <cluster_name> -clients-ca secret is used to sign the certificates of client applications. This secret must be accessible to the Streams for Apache Kafka components and for administrative access if you are intending to issue application certificates without using the User Operator. You can enforce this using OpenShift role-based access controls, if necessary. Table 16.10. Fields in the <cluster_name>-clients-ca secret Field Description ca.key The current private key for the clients CA. Table 16.11. Fields in the <cluster_name>-clients-ca-cert secret Field Description ca.p12 PKCS #12 store for storing certificates and keys. ca.password Password for protecting the PKCS #12 store. ca.crt The current certificate for the clients CA. 16.2.5. User secrets generated by the User Operator User secrets are managed by the User Operator. When a user is created using the User Operator, a secret is generated using the name of the user. Table 16.12. Fields in the user_name secret Secret name Field within secret Description <user_name> user.p12 PKCS #12 store for storing certificates and keys. user.password Password for protecting the PKCS #12 store. user.crt Certificate for the user, signed by the clients CA user.key Private key for the user 16.2.6. 
Adding labels and annotations to cluster CA secrets By configuring the clusterCaCert template property in the Kafka custom resource, you can add custom labels and annotations to the Cluster CA secrets created by the Cluster Operator. Labels and annotations are useful for identifying objects and adding contextual information. You configure template properties in Streams for Apache Kafka custom resources. Example template customization to add labels and annotations to secrets apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... 16.2.7. Disabling ownerReference in the CA secrets By default, the cluster and clients CA secrets are created with an ownerReference property that is set to the Kafka custom resource. This means that, when the Kafka custom resource is deleted, the CA secrets are also deleted (garbage collected) by OpenShift. If you want to reuse the CA for a new cluster, you can disable the ownerReference by setting the generateSecretOwnerReference property for the cluster and clients CA secrets to false in the Kafka configuration. When the ownerReference is disabled, CA secrets are not deleted by OpenShift when the corresponding Kafka custom resource is deleted. Example Kafka configuration with disabled ownerReference for cluster and clients CAs apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false # ... Additional resources CertificateAuthority schema reference 16.3. Certificate renewal and validity periods Cluster CA and clients CA certificates are only valid for a limited time period, known as the validity period. This is usually defined as a number of days since the certificate was generated. For CA certificates automatically created by the Cluster Operator, you can configure the validity period of: Cluster CA certificates in Kafka.spec.clusterCa.validityDays Clients CA certificates in Kafka.spec.clientsCa.validityDays The default validity period for both certificates is 365 days. Manually-installed CA certificates should have their own validity periods defined. When a CA certificate expires, components and clients that still trust that certificate will not accept connections from peers whose certificates were signed by the CA private key. The components and clients need to trust the new CA certificate instead. To allow the renewal of CA certificates without a loss of service, the Cluster Operator initiates certificate renewal before the old CA certificates expire. You can configure the renewal period of the certificates created by the Cluster Operator: Cluster CA certificates in Kafka.spec.clusterCa.renewalDays Clients CA certificates in Kafka.spec.clientsCa.renewalDays The default renewal period for both certificates is 30 days. The renewal period is measured backwards, from the expiry date of the current certificate. Validity period against renewal period To make a change to the validity and renewal periods after creating the Kafka cluster, you configure and apply the Kafka custom resource, and manually renew the CA certificates . If you do not manually renew the certificates, the new periods will be used the time the certificate is renewed automatically. Example Kafka configuration for certificate validity and renewal periods apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... 
clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true # ... The behavior of the Cluster Operator during the renewal period depends on the settings for the generateCertificateAuthority certificate generation properties for the cluster CA and clients CA. true If the properties are set to true , a CA certificate is generated automatically by the Cluster Operator, and renewed automatically within the renewal period. false If the properties are set to false , a CA certificate is not generated by the Cluster Operator. Use this option if you are installing your own certificates . 16.3.1. Renewal process with automatically generated CA certificates The Cluster Operator performs the following processes in this order when renewing CA certificates: Generates a new CA certificate, but retains the existing key. The new certificate replaces the old one with the name ca.crt within the corresponding Secret . Generates new client certificates (for ZooKeeper nodes, Kafka brokers, and the Entity Operator). This is not strictly necessary because the signing key has not changed, but it keeps the validity period of the client certificate in sync with the CA certificate. Restarts ZooKeeper nodes so that they will trust the new CA certificate and use the new client certificates. Restarts Kafka brokers so that they will trust the new CA certificate and use the new client certificates. Restarts the Topic and User Operators so that they will trust the new CA certificate and use the new client certificates. User certificates are signed by the clients CA. User certificates generated by the User Operator are renewed when the clients CA is renewed. 16.3.2. Client certificate renewal The Cluster Operator is not aware of the client applications using the Kafka cluster. When connecting to the cluster, and to ensure they operate correctly, client applications must: Trust the cluster CA certificate published in the <cluster> -cluster-ca-cert Secret. Use the credentials published in their <user-name> Secret to connect to the cluster. The User Secret provides credentials in PEM and PKCS #12 format, or it can provide a password when using SCRAM-SHA authentication. The User Operator creates the user credentials when a user is created. You must ensure clients continue to work after certificate renewal. The renewal process depends on how the clients are configured. If you are provisioning client certificates and keys manually, you must generate new client certificates and ensure the new certificates are used by clients within the renewal period. Failure to do this by the end of the renewal period could result in client applications being unable to connect to the cluster. Note For workloads running inside the same OpenShift cluster and namespace, Secrets can be mounted as a volume so the client Pods construct their keystores and truststores from the current state of the Secrets. For more details on this procedure, see Configuring internal clients to trust the cluster CA . 16.3.3. Manually renewing Cluster Operator-managed CA certificates Cluster and clients CA certificates generated by the Cluster Operator auto-renew at the start of their respective certificate renewal periods. However, you can use the strimzi.io/force-renew annotation to manually renew one or both of these certificates before the certificate renewal period starts. 
You might do this for security reasons, or if you have changed the renewal or validity periods for the certificates . A renewed certificate uses the same private key as the old certificate. Note If you are using your own CA certificates, the force-renew annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates . Prerequisites The Cluster Operator must be deployed. A Kafka cluster in which CA certificates and private keys are installed. The OpenSSL TLS management tool to check the period of validity for CA certificates. In this procedure, we use a Kafka cluster named my-cluster within the my-project namespace. Procedure Apply the strimzi.io/force-renew annotation to the secret that contains the CA certificate that you want to renew. Renewing the Cluster CA secret oc annotate secret my-cluster-cluster-ca-cert -n my-project strimzi.io/force-renew="true" Renewing the Clients CA secret oc annotate secret my-cluster-clients-ca-cert -n my-project strimzi.io/force-renew="true" At the reconciliation, the Cluster Operator generates new certificates. If maintenance time windows are configured, the Cluster Operator generates the new CA certificate at the first reconciliation within the maintenance time window. Check the period of validity for the new CA certificates. Checking the period of validity for the new cluster CA certificate oc get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates Checking the period of validity for the new clients CA certificate oc get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates The command returns a notBefore and notAfter date, which is the valid start and end date for the CA certificate. Update client configurations to trust the new cluster CA certificate. See: Section 16.4, "Configuring internal clients to trust the cluster CA" Section 16.5, "Configuring external clients to trust the cluster CA" 16.3.4. Manually recovering from expired Cluster Operator-managed CA certificates The Cluster Operator automatically renews the cluster and clients CA certificates when their renewal periods begin. Nevertheless, unexpected operational problems or disruptions may prevent the renewal process, such as prolonged downtime of the Cluster Operator or unavailability of the Kafka cluster. If CA certificates expire, Kafka cluster components cannot communicate with each other and the Cluster Operator cannot renew the CA certificates without manual intervention. To promptly perform a recovery, follow the steps outlined in this procedure in the order given. You can recover from expired cluster and clients CA certificates. The process involves deleting the secrets containing the expired certificates so that new ones are generated by the Cluster Operator. For more information on the secrets managed in Streams for Apache Kafka, see Section 16.2.2, "Secrets generated by the Cluster Operator" . Note If you are using your own CA certificates and they expire, the process is similar, but you need to renew the CA certificates rather than use certificates generated by the Cluster Operator. Prerequisites The Cluster Operator must be deployed. A Kafka cluster in which CA certificates and private keys are installed. The OpenSSL TLS management tool to check the period of validity for CA certificates. In this procedure, we use a Kafka cluster named my-cluster within the my-project namespace. 
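Before deleting any secrets, you can confirm that the CA certificates really have expired rather than only being close to expiry. The following check is a sketch that reuses the example cluster and namespace names above; openssl x509 -checkend 0 exits with a non-zero status if the certificate has already expired.
Checking whether the cluster CA certificate has expired
oc get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -checkend 0
Checking whether the clients CA certificate has expired
oc get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -checkend 0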
Procedure Delete the secret containing the expired CA certificate. Deleting the Cluster CA secret oc delete secret my-cluster-cluster-ca-cert -n my-project Deleting the Clients CA secret oc delete secret my-cluster-clients-ca-cert -n my-project Wait for the Cluster Operator to generate new certificates. A new CA cluster certificate to verify the identity of the Kafka brokers is created in a secret of the same name ( my-cluster-cluster-ca-cert ). A new CA clients certificate to verify the identity of Kafka users is created in a secret of the same name ( my-cluster-clients-ca-cert ). Check the period of validity for the new CA certificates. Checking the period of validity for the new cluster CA certificate oc get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates Checking the period of validity for the new clients CA certificate oc get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates The command returns a notBefore and notAfter date, which is the valid start and end date for the CA certificate. Delete the component pods and secrets that use the CA certificates. Delete the ZooKeeper secret. Wait for the Cluster Operator to detect the missing ZooKeeper secret and recreate it. Delete all ZooKeeper pods. Delete the Kafka secret. Wait for the Cluster Operator to detect the missing Kafka secret and recreate it. Delete all Kafka pods. If you are only recovering the clients CA certificate, you only need to delete the Kafka secret and pods. You can use the following oc command to find resources and also verify that they have been removed. oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource, such as Pod or Secret . Wait for the Cluster Operator to detect the missing Kafka and ZooKeeper pods and recreate them with the updated CA certificates. On reconciliation, the Cluster Operator automatically updates other components to trust the new CA certificates. Verify that there are no issues related to certificate validation in the Cluster Operator log. Update client configurations to trust the new cluster CA certificate. See: Section 16.4, "Configuring internal clients to trust the cluster CA" Section 16.5, "Configuring external clients to trust the cluster CA" 16.3.5. Replacing private keys used by Cluster Operator-managed CA certificates You can replace the private keys used by the cluster CA and clients CA certificates generated by the Cluster Operator. When a private key is replaced, the Cluster Operator generates a new CA certificate for the new private key. Note If you are using your own CA certificates, the force-replace annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates . Prerequisites The Cluster Operator is running. A Kafka cluster in which CA certificates and private keys are installed. Procedure Apply the strimzi.io/force-replace annotation to the Secret that contains the private key that you want to renew. Table 16.13. 
Commands for replacing private keys Private key for Secret Annotate command Cluster CA <cluster_name>-cluster-ca oc annotate secret <cluster_name>-cluster-ca strimzi.io/force-replace="true" Clients CA <cluster_name>-clients-ca oc annotate secret <cluster_name>-clients-ca strimzi.io/force-replace="true" At the reconciliation the Cluster Operator will: Generate a new private key for the Secret that you annotated Generate a new CA certificate If maintenance time windows are configured, the Cluster Operator will generate the new private key and CA certificate at the first reconciliation within the maintenance time window. Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator. Additional resources Section 16.2, "Secrets generated by the operators" Section 29.1, "Maintenance time windows for rolling updates" 16.4. Configuring internal clients to trust the cluster CA This procedure describes how to configure a Kafka client that resides inside the OpenShift cluster - connecting to a TLS listener - to trust the cluster CA certificate. The easiest way to achieve this for an internal client is to use a volume mount to access the Secrets containing the necessary certificates and keys. Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs. Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 ( .p12 ) or PEM ( .crt ). The steps describe how to mount the Cluster Secret that verifies the identity of the Kafka cluster to the client pod. Prerequisites The Cluster Operator must be running. There needs to be a Kafka resource within the OpenShift cluster. You need a Kafka client application inside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate. The client application must be running in the same namespace as the Kafka resource. Using PKCS #12 format (.p12) Mount the cluster Secret as a volume when defining the client pod. For example: kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert Here we're mounting the following: The PKCS #12 file into an exact path, which can be configured The password into an environment variable, where it can be used for Java configuration Configure the Kafka client with the following properties: A security protocol option: security.protocol: SSL when using TLS for encryption (with or without mTLS authentication). security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS. ssl.truststore.location with the truststore location where the certificates were imported. ssl.truststore.password with the password for accessing the truststore. ssl.truststore.type=PKCS12 to identify the truststore type. Using PEM format (.crt) Mount the cluster Secret as a volume when defining the client pod. For example: kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert Use the extracted certificate to configure a TLS connection in clients that use certificates in X.509 format. 16.5. 
Configuring external clients to trust the cluster CA This procedure describes how to configure a Kafka client that resides outside the OpenShift cluster - connecting to an external listener - to trust the cluster CA certificate. Follow this procedure when setting up the client and during the renewal period, when the old clients CA certificate is replaced. Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs. Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 ( .p12 ) or PEM ( .crt ). The steps describe how to obtain the certificate from the Cluster Secret that verifies the identity of the Kafka cluster. Important The <cluster_name> -cluster-ca-cert secret contains more than one CA certificate during the CA certificate renewal period. Clients must add all of them to their truststores. Prerequisites The Cluster Operator must be running. There needs to be a Kafka resource within the OpenShift cluster. You need a Kafka client application outside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate. Using PKCS #12 format (.p12) Extract the cluster CA certificate and password from the <cluster_name> -cluster-ca-cert Secret of the Kafka cluster. oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12 oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password Replace <cluster_name> with the name of the Kafka cluster. Configure the Kafka client with the following properties: A security protocol option: security.protocol: SSL when using TLS. security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS. ssl.truststore.location with the truststore location where the certificates were imported. ssl.truststore.password with the password for accessing the truststore. This property can be omitted if it is not needed by the truststore. ssl.truststore.type=PKCS12 to identify the truststore type. Using PEM format (.crt) Extract the cluster CA certificate from the <cluster_name> -cluster-ca-cert secret of the Kafka cluster. oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate to configure a TLS connection in clients that use certificates in X.509 format. 16.6. Using your own CA certificates and private keys Install and use your own CA certificates and private keys instead of using the defaults generated by the Cluster Operator. You can replace the cluster and clients CA certificates and private keys. You can switch to using your own CA certificates and private keys in the following ways: Install your own CA certificates and private keys before deploying your Kafka cluster Replace the default CA certificates and private keys with your own after deploying a Kafka cluster The steps to replace the default CA certificates and private keys after deploying a Kafka cluster are the same as those used to renew your own CA certificates and private keys. If you use your own certificates, they won't be renewed automatically. You need to renew the CA certificates and private keys before they expire. Renewal options: Renew the CA certificates only Renew CA certificates and private keys (or replace the defaults) 16.6.1. 
Installing your own CA certificates and private keys Install your own CA certificates and private keys instead of using the cluster and clients CA certificates and private keys generated by the Cluster Operator. By default, Streams for Apache Kafka uses the following cluster CA and clients CA secrets , which are renewed automatically. Cluster CA secrets <cluster_name>-cluster-ca <cluster_name>-cluster-ca-cert Clients CA secrets <cluster_name>-clients-ca <cluster_name>-clients-ca-cert To install your own certificates, use the same names. Prerequisites The Cluster Operator is running. A Kafka cluster is not yet deployed. If you have already deployed a Kafka cluster, you can replace the default CA certificates with your own . Your own X.509 certificates and keys in PEM format for the cluster CA or clients CA. If you want to use a cluster or clients CA which is not a Root CA, you have to include the whole chain in the certificate file. The chain should be in the following order: The cluster or clients CA One or more intermediate CAs The root CA All CAs in the chain should be configured using the X509v3 Basic Constraints extension. Basic Constraints limit the path length of a certificate chain. The OpenSSL TLS management tool for converting certificates. Before you begin The Cluster Operator generates keys and certificates in PEM (Privacy Enhanced Mail) and PKCS #12 (Public-Key Cryptography Standards) formats. You can add your own certificates in either format. Some applications cannot use PEM certificates and support only PKCS #12 certificates. If you don't have a cluster certificate in PKCS #12 format, use the OpenSSL TLS management tool to generate one from your ca.crt file. Example certificate generation command openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:<P12_password> -caname ca.crt Replace <P12_password> with your own password. Procedure Create a new secret that contains the CA certificate. Client secret creation with a certificate in PEM format only oc create secret generic <cluster_name>-clients-ca-cert --from-file=ca.crt=ca.crt Cluster secret creation with certificates in PEM and PKCS #12 format oc create secret generic <cluster_name>-cluster-ca-cert \ --from-file=ca.crt=ca.crt \ --from-file=ca.p12=ca.p12 \ --from-literal=ca.password= P12-PASSWORD Replace <cluster_name> with the name of your Kafka cluster. Create a new secret that contains the private key. oc create secret generic <ca_key_secret> --from-file=ca.key=ca.key Label the secrets. oc label secret <ca_certificate_secret> strimzi.io/kind=Kafka strimzi.io/cluster="<cluster_name>" oc label secret <ca_key_secret> strimzi.io/kind=Kafka strimzi.io/cluster="<cluster_name>" Label strimzi.io/kind=Kafka identifies the Kafka custom resource. Label strimzi.io/cluster="<cluster_name>" identifies the Kafka cluster. Annotate the secrets oc annotate secret <ca_certificate_secret> strimzi.io/ca-cert-generation="<ca_certificate_generation>" oc annotate secret <ca_key_secret> strimzi.io/ca-key-generation="<ca_key_generation>" Annotation strimzi.io/ca-cert-generation="<ca_certificate_generation>" defines the generation of a new CA certificate. Annotation strimzi.io/ca-key-generation="<ca_key_generation>" defines the generation of a new CA key. Start from 0 (zero) as the incremental value ( strimzi.io/ca-cert-generation=0 ) for your own CA certificate. Set a higher incremental value when you renew the certificates. 
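As an illustration, for a cluster named my-cluster the labeling and annotation steps for your own cluster CA secrets might look as follows; the secret names follow the naming convention described above, and the cluster name is an example only.
oc label secret my-cluster-cluster-ca-cert strimzi.io/kind=Kafka strimzi.io/cluster="my-cluster"
oc label secret my-cluster-cluster-ca strimzi.io/kind=Kafka strimzi.io/cluster="my-cluster"
oc annotate secret my-cluster-cluster-ca-cert strimzi.io/ca-cert-generation="0"
oc annotate secret my-cluster-cluster-ca strimzi.io/ca-key-generation="0"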
Create the Kafka resource for your cluster, configuring either the Kafka.spec.clusterCa or the Kafka.spec.clientsCa object to not use generated CAs. Example fragment Kafka resource configuring the cluster CA to use certificates you supply for yourself kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # ... clusterCa: generateCertificateAuthority: false Additional resources Section 16.6.2, "Renewing your own CA certificates" Section 16.6.3, "Renewing or replacing CA certificates and private keys with your own" Section 15.3.4, "Providing your own Kafka listener certificates for TLS encryption" 16.6.2. Renewing your own CA certificates If you are using your own CA certificates, you need to renew them manually. The Cluster Operator will not renew them automatically. Renew the CA certificates in the renewal period before they expire. Perform the steps in this procedure when you are renewing CA certificates and continuing with the same private key. If you are renewing your own CA certificates and private keys, see Section 16.6.3, "Renewing or replacing CA certificates and private keys with your own" . The procedure describes the renewal of CA certificates in PEM format. Prerequisites The Cluster Operator is running. You have new cluster or clients X.509 certificates in PEM format. Procedure Update the Secret for the CA certificate. Edit the existing secret to add the new CA certificate and update the certificate generation annotation value. oc edit secret <ca_certificate_secret_name> <ca_certificate_secret_name> is the name of the Secret , which is <kafka_cluster_name> -cluster-ca-cert for the cluster CA certificate and <kafka_cluster_name> -clients-ca-cert for the clients CA certificate. The following example shows a secret for a cluster CA certificate that's associated with a Kafka cluster named my-cluster . Example secret configuration for a cluster CA certificate apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: "0" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 Current base64-encoded CA certificate 2 Current CA certificate generation annotation value Encode your new CA certificate into base64. cat <path_to_new_certificate> | base64 Update the CA certificate. Copy the base64-encoded CA certificate from the step as the value for the ca.crt property under data . Increase the value of the CA certificate generation annotation. Update the strimzi.io/ca-cert-generation annotation with a higher incremental value. For example, change strimzi.io/ca-cert-generation=0 to strimzi.io/ca-cert-generation=1 . If the Secret is missing the annotation, the value is treated as 0 , so add the annotation with a value of 1 . When Streams for Apache Kafka generates certificates, the certificate generation annotation is automatically incremented by the Cluster Operator. For your own CA certificates, set the annotations with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates. The strimzi.io/ca-cert-generation has to be incremented on each CA certificate renewal. Save the secret with the new CA certificate and certificate generation annotation value. Example secret configuration updated with a new CA certificate apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 
1 metadata: annotations: strimzi.io/ca-cert-generation: "1" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 New base64-encoded CA certificate 2 New CA certificate generation annotation value On the reconciliation, the Cluster Operator performs a rolling update of ZooKeeper, Kafka, and other components to trust the new CA certificate. If maintenance time windows are configured, the Cluster Operator will roll the pods at the first reconciliation within the maintenance time window. 16.6.3. Renewing or replacing CA certificates and private keys with your own If you are using your own CA certificates and private keys, you need to renew them manually. The Cluster Operator will not renew them automatically. Renew the CA certificates in the renewal period before they expire. You can also use the same procedure to replace the CA certificates and private keys generated by the Streams for Apache Kafka operators with your own. Perform the steps in this procedure when you are renewing or replacing CA certificates and private keys. If you are only renewing your own CA certificates, see Section 16.6.2, "Renewing your own CA certificates" . The procedure describes the renewal of CA certificates and private keys in PEM format. Before going through the following steps, make sure that the CN (Common Name) of the new CA certificate is different from the current one. For example, when the Cluster Operator renews certificates automatically it adds a v<version_number> suffix to identify a version. Do the same with your own CA certificate by adding a different suffix on each renewal. By using a different key to generate a new CA certificate, you retain the current CA certificate stored in the Secret . Prerequisites The Cluster Operator is running. You have new cluster or clients X.509 certificates and keys in PEM format. Procedure Pause the reconciliation of the Kafka custom resource. Annotate the custom resource in OpenShift, setting the pause-reconciliation annotation to true : oc annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation="true" For example, for a Kafka custom resource named my-cluster : oc annotate Kafka my-cluster strimzi.io/pause-reconciliation="true" Check that the status conditions of the custom resource show a change to ReconciliationPaused : oc describe Kafka <name_of_custom_resource> The type condition changes to ReconciliationPaused at the lastTransitionTime . Check the settings for the generateCertificateAuthority properties in your Kafka custom resource. If a property is set to false , a CA certificate is not generated by the Cluster Operator. You require this setting if you are using your own certificates. If needed, edit the existing Kafka custom resource and set the generateCertificateAuthority properties to false . oc edit Kafka <name_of_custom_resource> The following example shows a Kafka custom resource with both cluster and clients CA certificates generation delegated to the user. Example Kafka configuration using your own CA certificates apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... clusterCa: generateCertificateAuthority: false 1 clientsCa: generateCertificateAuthority: false 2 # ... 1 Use your own cluster CA 2 Use your own clients CA Update the Secret for the CA certificate. Edit the existing secret to add the new CA certificate and update the certificate generation annotation value. 
oc edit secret <ca_certificate_secret_name> <ca_certificate_secret_name> is the name of the Secret , which is <kafka_cluster_name>-cluster-ca-cert for the cluster CA certificate and <kafka_cluster_name>-clients-ca-cert for the clients CA certificate. The following example shows a secret for a cluster CA certificate that's associated with a Kafka cluster named my-cluster . Example secret configuration for a cluster CA certificate apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: "0" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 Current base64-encoded CA certificate 2 Current CA certificate generation annotation value Rename the current CA certificate to retain it. Rename the current ca.crt property under data as ca-<date>.crt , where <date> is the certificate expiry date in the format YEAR-MONTH-DAYTHOUR-MINUTE-SECONDZ . For example ca-2023-01-26T17-32-00Z.crt: . Leave the value for the property as it is to retain the current CA certificate. Encode your new CA certificate into base64. cat <path_to_new_certificate> | base64 Update the CA certificate. Create a new ca.crt property under data and copy the base64-encoded CA certificate from the step as the value for ca.crt property. Increase the value of the CA certificate generation annotation. Update the strimzi.io/ca-cert-generation annotation with a higher incremental value. For example, change strimzi.io/ca-cert-generation=0 to strimzi.io/ca-cert-generation=1 . If the Secret is missing the annotation, the value is treated as 0 , so add the annotation with a value of 1 . When Streams for Apache Kafka generates certificates, the certificate generation annotation is automatically incremented by the Cluster Operator. For your own CA certificates, set the annotations with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates. The strimzi.io/ca-cert-generation has to be incremented on each CA certificate renewal. Save the secret with the new CA certificate and certificate generation annotation value. Example secret configuration updated with a new CA certificate apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 ca-2023-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 2 metadata: annotations: strimzi.io/ca-cert-generation: "1" 3 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 New base64-encoded CA certificate 2 Old base64-encoded CA certificate 3 New CA certificate generation annotation value Update the Secret for the CA key used to sign your new CA certificate. Edit the existing secret to add the new CA key and update the key generation annotation value. oc edit secret <ca_key_name> <ca_key_name> is the name of CA key, which is <kafka_cluster_name>-cluster-ca for the cluster CA key and <kafka_cluster_name>-clients-ca for the clients CA key. The following example shows a secret for a cluster CA key that's associated with a Kafka cluster named my-cluster . Example secret configuration for a cluster CA key apiVersion: v1 kind: Secret data: ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS... 1 metadata: annotations: strimzi.io/ca-key-generation: "0" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca #... 
type: Opaque 1 Current base64-encoded CA key 2 Current CA key generation annotation value Encode the CA key into base64. cat <path_to_new_key> | base64 Update the CA key. Copy the base64-encoded CA key from the step as the value for the ca.key property under data . Increase the value of the CA key generation annotation. Update the strimzi.io/ca-key-generation annotation with a higher incremental value. For example, change strimzi.io/ca-key-generation=0 to strimzi.io/ca-key-generation=1 . If the Secret is missing the annotation, it is treated as 0 , so add the annotation with a value of 1 . When Streams for Apache Kafka generates certificates, the key generation annotation is automatically incremented by the Cluster Operator. For your own CA certificates together with a new CA key, set the annotation with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates and keys. The strimzi.io/ca-key-generation has to be incremented on each CA certificate renewal. Save the secret with the new CA key and key generation annotation value. Example secret configuration updated with a new CA key apiVersion: v1 kind: Secret data: ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F... 1 metadata: annotations: strimzi.io/ca-key-generation: "1" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca #... type: Opaque 1 New base64-encoded CA key 2 New CA key generation annotation value Resume from the pause. To resume the Kafka custom resource reconciliation, set the pause-reconciliation annotation to false . oc annotate --overwrite Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation="false" You can also do the same by removing the pause-reconciliation annotation. oc annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation- On the reconciliation, the Cluster Operator performs a rolling update of ZooKeeper, Kafka, and other components to trust the new CA certificate. When the rolling update is complete, the Cluster Operator will start a new one to generate new server certificates signed by the new CA key. If maintenance time windows are configured, the Cluster Operator will roll the pods at the first reconciliation within the maintenance time window. Wait until the rolling updates to move to the new CA certificate are complete. Remove any outdated certificates from the secret configuration to ensure that the cluster no longer trusts them. oc edit secret <ca_certificate_secret_name> Example secret configuration with the old certificate removed apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... metadata: annotations: strimzi.io/ca-cert-generation: "1" labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque Start a manual rolling update of your cluster to pick up the changes made to the secret configuration. See Section 29.2, "Starting rolling updates of Kafka and other operands using annotations" . | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false",
"Not Before Not After | | |<--------------- validityDays --------------->| <--- renewalDays --->|",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true",
"annotate secret my-cluster-cluster-ca-cert -n my-project strimzi.io/force-renew=\"true\"",
"annotate secret my-cluster-clients-ca-cert -n my-project strimzi.io/force-renew=\"true\"",
"get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"delete secret my-cluster-cluster-ca-cert -n my-project",
"delete secret my-cluster-clients-ca-cert -n my-project",
"get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"get <resource_type> --all-namespaces | grep <kafka_cluster_name>",
"kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert",
"kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:<P12_password> -caname ca.crt",
"create secret generic <cluster_name>-clients-ca-cert --from-file=ca.crt=ca.crt",
"create secret generic <cluster_name>-cluster-ca-cert --from-file=ca.crt=ca.crt --from-file=ca.p12=ca.p12 --from-literal=ca.password= P12-PASSWORD",
"create secret generic <ca_key_secret> --from-file=ca.key=ca.key",
"label secret <ca_certificate_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"",
"label secret <ca_key_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"",
"annotate secret <ca_certificate_secret> strimzi.io/ca-cert-generation=\"<ca_certificate_generation>\"",
"annotate secret <ca_key_secret> strimzi.io/ca-key-generation=\"<ca_key_generation>\"",
"kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # clusterCa: generateCertificateAuthority: false",
"edit secret <ca_certificate_secret_name>",
"apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"cat <path_to_new_certificate> | base64",
"apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"",
"annotate Kafka my-cluster strimzi.io/pause-reconciliation=\"true\"",
"describe Kafka <name_of_custom_resource>",
"edit Kafka <name_of_custom_resource>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateCertificateAuthority: false 1 clientsCa: generateCertificateAuthority: false 2",
"edit secret <ca_certificate_secret_name>",
"apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"cat <path_to_new_certificate> | base64",
"apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 ca-2023-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 2 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 3 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"edit secret <ca_key_name>",
"apiVersion: v1 kind: Secret data: ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS... 1 metadata: annotations: strimzi.io/ca-key-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque",
"cat <path_to_new_key> | base64",
"apiVersion: v1 kind: Secret data: ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F... 1 metadata: annotations: strimzi.io/ca-key-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque",
"annotate --overwrite Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"false\"",
"annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation-",
"edit secret <ca_certificate_secret_name>",
"apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F metadata: annotations: strimzi.io/ca-cert-generation: \"1\" labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/security-str |
Chapter 3. Debug symbols for Red Hat build of OpenJDK 8 | Chapter 3. Debug symbols for Red Hat build of OpenJDK 8 Debug symbols help in investigating a crash in Red Hat build of OpenJDK applications. 3.1. Installing the debug symbols This procedure describes how to install the debug symbols for Red Hat build of OpenJDK. Prerequisites Installed the gdb package on your local system. You can issue the sudo yum install gdb command on your CLI to install this package on your local system. Procedure To install the debug symbols, enter the following command: These commands install java-1.8.0-openjdk-debuginfo , java-1.8.0-openjdk-headless-debuginfo , and additional packages that provide debug symbols for Red Hat build of OpenJDK 8 binaries. These packages are not self-sufficient and do not contain executable binaries. Note The debuginfo-install command is provided by the yum-utils package. To verify that the debug symbols are installed, enter the following command: 3.2. Checking the installation location of debug symbols This procedure explains how to find the location of debug symbols. Note If the debuginfo package is installed, but you cannot get the installation location of the package, then check if the correct package and java versions are installed. After confirming the versions, check the location of debug symbols again. Prerequisites Installed the gdb package on your local system. You can issue the sudo yum install gdb command on your CLI to install this package on your local system. Installed the debug symbols package. See Installing the debug symbols . Procedure To find the location of debug symbols, use gdb with the which java command: Use the following commands to explore the *-debug directory to see all the debug versions of the libraries, which include java , javac , and javah : Note The javac and javah tools are provided by the java-1.8.0-openjdk-devel package. You can install the package using the command: $ sudo debuginfo-install java-1.8.0-openjdk-devel . 3.3. Checking the configuration of debug symbols You can check and set configurations for debug symbols. To get the list of the installed packages, enter the following command: If some debug information packages have not been installed, enter the following command to install the missing packages: Run the following command if you want to hit a specific breakpoint: The above command completes the following tasks: Handles the SIGSEGV error, as the JVM uses SEGV for its stack overflow check. Sets pending breakpoints to yes . Sets a breakpoint at the JavaCalls::call function, which starts the application in HotSpot (libjvm.so). 3.4. Configuring the debug symbols in a fatal error log file When a Java application goes down due to a JVM crash, a fatal error log file is generated, for example: hs_error , java_error . These error log files are generated in the current working directory of the application. The crash file contains information from the stack. Procedure You can remove all the debug symbols by using the strip -g command. The following code shows an example of a non-stripped hs_error file: The following code shows an example of a stripped hs_error file: Enter the following command to check that you have the same version of debug symbols and the fatal error log file: Note You can also use the sudo update-alternatives --config 'java' command to complete this check. Use the nm command to ensure that libjvm.so has ELF data and text symbols: Additional resources The crash file hs_error is incomplete without the debug symbols installed. 
For more information, see Java application down due to JVM crash . | [
"sudo yum debuginfo-install java-1.8.0-openjdk sudo yum debuginfo-install java-1.8.0-openjdk-headless",
"gdb which java Reading symbols from /usr/bin/java...Reading symbols from /usr/lib/debug/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.322.b06-2.el8_5/bin/java-1.8.0.322.b06-2.el8_5.x86_64.debug...done. (gdb)",
"gdb which java Reading symbols from /usr/bin/java...Reading symbols from /usr/lib/debug/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.322.b06-2.el8_5/bin/java-1.8.0.322.b06-2.el8_5.x86_64.debug...done. done. (gdb)",
"cd /usr/lib/debug/lib/jvm/java-1.8.0-openjdk-1.8.0.322.b06-2.el8_5",
"tree OJDK 8 version: └── java-1.8.0-openjdk-1.8.0.322.b06-2.el8_5 ├── bin │ │ │── java-java-1.8.0.322.b06-2.el8_5.x86_64.debug │ ├── javac-java-1.8.0.322.b06-2.el8_5.x86_64.debug │ ├── javadoc-java-1.8.0.322.b06-2.el8_5.x86_64.debug │ └── lib ├── jexec-java-1.8.0.322.b06-2.el8_5.x86_64.debug ├── jli │ └── libjli.so-java-1.8.0.322.b06-2.el8_5.x86_64.debug ├── jspawnhelper-java-1.8.0.322.b06-2.el8_5.x86_64.debug │",
"sudo yum list installed | grep 'java-1.8.0-openjdk-debuginfo'",
"sudo yum debuginfo-install glibc-2.28-151.el8.x86_64 libgcc-8.4.1-1.el8.x86_64 libstdc++-8.4.1-1.el8.x86_64 sssd-client-2.4.0-9.el8.x86_64 zlib-1.2.11-17.el8.x86_64",
"gdb -ex 'handle SIGSEGV noprint nostop pass' -ex 'set breakpoint pending on' -ex 'break JavaCalls::call' -ex 'run' --args java ./HelloWorld",
"Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xb83d2a] Unsafe_SetLong+0xda j sun.misc.Unsafe.putLong(Ljava/lang/Object;JJ)V+0 j Crash.main([Ljava/lang/String;)V+8 v ~StubRoutines::call_stub V [libjvm.so+0x6c0e65] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xc85 V [libjvm.so+0x73cc0d] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .constprop.1]+0x31d V [libjvm.so+0x73fd16] jni_CallStaticVoidMethod+0x186 C [libjli.so+0x48a2] JavaMain+0x472 C [libpthread.so.0+0x9432] start_thread+0xe2",
"Stack: [0x00007ff7e1a44000,0x00007ff7e1b44000], sp=0x00007ff7e1b42850, free space=1018k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xa7ecab] j sun.misc.Unsafe.putAddress(JJ)V+0 j Crash.crash()V+5 j Crash.main([Ljava/lang/String;)V+0 v ~StubRoutines::call_stub V [libjvm.so+0x67133a] V [libjvm.so+0x682bca] V [libjvm.so+0x6968b6] C [libjli.so+0x3989] C [libpthread.so.0+0x7dd5] start_thread+0xc5",
"java -version",
"nm /usr/lib/debug/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.322.b06-2.el8_5/lib/server/libjvm.so-1.8.0.322.b06-2.el8_5.x86_64.debug"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/installing_and_using_red_hat_build_of_openjdk_8_for_rhel/installing-and-configuring-debug-symbols |
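The following is a minimal, non-authoritative sketch that ties the chapter's gdb options together in a single scripted run. It assumes that the java-1.8.0-openjdk packages and their debuginfo counterparts from Section 3.1 are installed, and that HelloWorld is a placeholder class compiled in the current directory; adjust the class name for your own application.

#!/usr/bin/env bash
# Sketch only: batch-mode gdb session against the installed OpenJDK 8 binary.
# When the debug symbols match the java binary, gdb prints
# "Reading symbols from ... .debug ... done."; a "no debugging symbols found"
# message indicates a missing or mismatched debuginfo package.
set -euo pipefail

gdb -batch \
    -ex 'handle SIGSEGV noprint nostop pass' \
    -ex 'set breakpoint pending on' \
    -ex 'break JavaCalls::call' \
    -ex 'run' \
    -ex 'bt' \
    --args java HelloWorld
# - SIGSEGV is passed through because HotSpot uses it for stack-overflow checks.
# - The breakpoint stays pending until libjvm.so is loaded during JVM startup.
# - 'bt' prints the native stack once JavaCalls::call is hit, then gdb exits.

Because the session is non-interactive, it can be dropped into a support script to capture the native stack reproducibly.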
Chapter 131. KafkaBridgeTemplate schema reference | Chapter 131. KafkaBridgeTemplate schema reference Used in: KafkaBridgeSpec Property Property type Description deployment DeploymentTemplate Template for Kafka Bridge Deployment . pod PodTemplate Template for Kafka Bridge Pods . apiService InternalServiceTemplate Template for Kafka Bridge API Service . podDisruptionBudget PodDisruptionBudgetTemplate Template for Kafka Bridge PodDisruptionBudget . bridgeContainer ContainerTemplate Template for the Kafka Bridge container. clusterRoleBinding ResourceTemplate Template for the Kafka Bridge ClusterRoleBinding. serviceAccount ResourceTemplate Template for the Kafka Bridge service account. initContainer ContainerTemplate Template for the Kafka Bridge init container. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeTemplate-reference |
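As an illustration of where these template fields sit in practice, the following is a hypothetical KafkaBridge resource that customizes the pod, bridgeContainer, and apiService templates. The resource name, bootstrap address, labels, environment variable, and annotation are assumptions for the sketch, not documented defaults.

# Hypothetical sketch: a KafkaBridge resource exercising a few template fields.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
  template:
    pod:
      metadata:
        labels:
          app.kubernetes.io/part-of: my-bridge   # extra labels merged into the bridge pods
    bridgeContainer:
      env:
        - name: EXAMPLE_FLAG                     # assumed example variable, not a documented key
          value: "true"
    apiService:
      metadata:
        annotations:
          example.com/owner: integration-team    # assumed example annotation
EOF

Fields omitted from spec.template are left to the defaults that the Cluster Operator generates.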
Appendix C. Publishing Module Reference | Appendix C. Publishing Module Reference Several publisher, mapper, and rule modules are configured by default with the Certificate Manager. Section C.1, "Publisher Plug-in Modules" Section C.2, "Mapper Plug-in Modules " Section C.3, "Rule Instances" C.1. Publisher Plug-in Modules This section describes the publisher modules provided for the Certificate Manager. The modules are used by the Certificate Manager to enable and configure specific publisher instances. Section C.1.1, "FileBasedPublisher" Section C.1.2, "LdapCaCertPublisher" Section C.1.3, "LdapUserCertPublisher" Section C.1.4, "LdapCrlPublisher" Section C.1.5, "LdapDeltaCrlPublisher" Section C.1.6, "LdapCertificatePairPublisher" Section C.1.7, "OCSPPublisher" C.1.1. FileBasedPublisher The FileBasedPublisher plug-in module configures a Certificate Manager to publish certificates and CRLs to file. This plug-in can publish base-64 encoded files, DER-encoded files, or both, depending on the checkboxes selected when the publisher is configured. The certificate and CRL content can be viewed by converting the files using the PrettyPrintCert and PrettyPrintCRL tools. For details on viewing the content in base-64 and DER-encoded certificates and CRLs, see Section 9.11, "Viewing Certificates and CRLs Published to File" . By default, the Certificate Manager does not create an instance of the FileBasedPublisher module. Table C.1. FileBasedPublisher Configuration Parameters Parameter Description Publisher ID Specifies a name for the publisher, an alphanumeric string with no spaces. For example, PublishCertsToFile . directory Specifies the complete path to the directory to which the Certificate Manager creates the files; the path can be an absolute path or can be relative to the Certificate System instance directory. For example, /export/CS/certificates . C.1.2. LdapCaCertPublisher The LdapCaCertPublisher plug-in module configures a Certificate Manager to publish or unpublish a CA certificate to the caCertificate;binary attribute of the CA's directory entry. The module converts the object class of the CA's entry to pkiCA or certificationAuthority , if it is not used already. Similarly, it also removes the pkiCA or certificationAuthority object class when unpublishing if the CA has no other certificates. During installation, the Certificate Manager automatically creates an instance of the LdapCaCertPublisher module for publishing the CA certificate to the directory. Table C.2. LdapCaCertPublisher Configuration Parameters Parameter Description caCertAttr Specifies the LDAP directory attribute to publish the CA certificate. This must be caCertificate;binary . caObjectClass Specifies the object class for the CA's entry in the directory. This must be pkiCA or certificationAuthority . C.1.3. LdapUserCertPublisher The LdapUserCertPublisher plug-in module configures a Certificate Manager to publish or unpublish a user certificate to the userCertificate;binary attribute of the user's directory entry. This module is used to publish any end-entity certificate to an LDAP directory. Types of end-entity certificates include SSL client, S/MIME, SSL server, and OCSP responder. During installation, the Certificate Manager automatically creates an instance of the LdapUserCertPublisher module for publishing end-entity certificates to the directory. Table C.3. 
LdapUserCertPublisher Configuration Parameters Parameter Description certAttr Specifies the directory attribute of the mapped entry to which the Certificate Manager should publish the certificate. This must be userCertificate;binary . C.1.4. LdapCrlPublisher The LdapCrlPublisher plug-in module configures a Certificate Manager to publish or unpublish the CRL to the certificateRevocationList;binary attribute of a directory entry. During installation, the Certificate Manager automatically creates an instance of the LdapCrlPublisher module for publishing CRLs to the directory. Table C.4. LdapCrlPublisher Configuration Parameters Parameter Description crlAttr Specifies the directory attribute of the mapped entry to which the Certificate Manager should publish the CRL. This must be certificateRevocationList;binary . C.1.5. LdapDeltaCrlPublisher The LdapDeltaCrlPublisher plug-in module configures a Certificate Manager to publish or unpublish a delta CRL to the deltaRevocationList attribute of a directory entry. During installation, the Certificate Manager automatically creates an instance of the LdapDeltaCrlPublisher module for publishing CRLs to the directory. Table C.5. LdapDeltaCrlPublisher Configuration Parameters Parameter Description crlAttr Specifies the directory attribute of the mapped entry to which the Certificate Manager should publish the delta CRL. This must be deltaRevocationList;binary . C.1.6. LdapCertificatePairPublisher The LdapCertificatePairPublisher plug-in module configures a Certificate Manager to publish or unpublish a cross-signed certificate to the crossCertPair;binary attribute of the CA's directory entry. The module also converts the object class of the CA's entry to a pkiCA or certificationAuthority , if it is not used already. Similarly, it also removes the pkiCA or certificationAuthority object class when unpublishing if the CA has no other certificates. During installation, the Certificate Manager automatically creates an instance of the LdapCertificatePairPublisher module named LdapCrossCertPairPublisher for publishing the cross-signed certificates to the directory. Table C.6. LdapCertificatePairPublisher Parameters Parameter Description crossCertPairAttr Specifies the LDAP directory attribute to publish the CA certificate. This must be crossCertificatePair;binary . caObjectClass Specifies the object class for the CA's entry in the directory. This must be pkiCA or certificationAuthority . C.1.7. OCSPPublisher The OCSPPublisher plug-in module configures a Certificate Manager to publish its CRLs to an Online Certificate Status Manager. The Certificate Manager does not create any instances of the OCSPPublisher module at installation. Table C.7. OCSPPublisher Parameters Parameter Description host Specifies the fully qualified hostname of the Online Certificate Status Manager. port Specifies the port number on which the Online Certificate Status Manager is listening to the Certificate Manager. This is the Online Certificate Status Manager's SSL port number. path Specifies the path for publishing the CRL. This must be the default path, /ocsp/agent/ocsp/addCRL . enableClientAuth Sets whether to use client (certificate-based) authentication to access the OCSP service. nickname Gives the nickname of the certificate in the OCSP service's database to use for client authentication. This is only used if the enableClientAuth option is set to true. 
| null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/publishing_module_reference |
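To make the parameter tables above more concrete, the following shell sketch shows how you might inspect an OCSPPublisher instance's settings in the CA's CS.cfg file. The instance name OCSPPublisher-1, the pki-tomcat instance path, and the sample values are assumptions; publisher instances are normally created and edited through the Certificate Manager console rather than by editing CS.cfg directly.

# Hypothetical sketch: list the configuration entries for an assumed publisher instance.
CS_CFG=/var/lib/pki/pki-tomcat/ca/conf/CS.cfg
grep 'publish.publisher.instance.OCSPPublisher-1' "${CS_CFG}" || true
# Values corresponding to Table C.7 would look roughly like this (illustrative only):
#   ...OCSPPublisher-1.host=ocsp.example.com         # fully qualified hostname of the OCSP Manager
#   ...OCSPPublisher-1.port=8443                     # the OCSP Manager's SSL port
#   ...OCSPPublisher-1.path=/ocsp/agent/ocsp/addCRL  # default publishing path
#   ...OCSPPublisher-1.enableClientAuth=true
#   ...OCSPPublisher-1.nickname=subsystemCert        # nickname of the client-auth certificate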
Chapter 11. Enabling encryption on a vSphere cluster | Chapter 11. Enabling encryption on a vSphere cluster You can encrypt your virtual machines after installing OpenShift Container Platform 4.14 on vSphere by draining and shutting down your nodes one at a time. While each virtual machine is shut down, you can enable encryption in the vCenter web interface. 11.1. Encrypting virtual machines You can encrypt your virtual machines with the following process. You can drain your virtual machines, power them down, and encrypt them using the vCenter interface. Finally, you can create a storage class to use the encrypted storage. Prerequisites You have configured a Standard key provider in vSphere. For more information, see Adding a KMS to vCenter Server . Important The Native key provider in vCenter is not supported. For more information, see vSphere Native Key Provider Overview . You have enabled host encryption mode on all of the ESXi hosts that are hosting the cluster. For more information, see Enabling host encryption mode . You have a vSphere account that has all cryptographic privileges enabled. For more information, see Cryptographic Operations Privileges . Procedure Drain and cordon one of your nodes. For detailed instructions on node management, see "Working with Nodes". Shut down the virtual machine associated with that node in the vCenter interface. Right-click on the virtual machine in the vCenter interface and select VM Policies Edit VM Storage Policies . Select an encrypted storage policy and select OK . Start the encrypted virtual machine in the vCenter interface. Repeat steps 1-5 for all nodes that you want to encrypt. Configure a storage class that uses the encrypted storage policy. For more information about configuring an encrypted storage class, see "VMware vSphere CSI Driver Operator". A sample storage class for this step is sketched after this chapter. 11.2. Additional resources Working with nodes vSphere encryption Requirements for encrypting virtual machines | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/vsphere-post-installation-encryption
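The final step above, creating a storage class for the encrypted storage policy, can be sketched as follows. This is a minimal example assuming the vSphere CSI driver is in use; the class name and the openshift-encrypted-policy policy name are placeholders for the encryption policy you selected in vCenter.

# Minimal sketch: StorageClass bound to an assumed encrypted VM storage policy.
oc apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-thin
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: "openshift-encrypted-policy"   # must match the policy name in vCenter
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF

Persistent volume claims that reference this class are then provisioned on datastores that are compatible with the encryption policy.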
Chapter 6. File Integrity Operator | Chapter 6. File Integrity Operator 6.1. File Integrity Operator overview The File Integrity Operator continually runs file integrity checks on the cluster nodes. It deploys a DaemonSet that initializes and runs privileged Advanced Intrusion Detection Environment (AIDE) containers on each node, providing a log of files that have been modified since the initial run of the DaemonSet pods. For the latest updates, see the File Integrity Operator release notes . Installing the File Integrity Operator Updating the File Integrity Operator Understanding the File Integrity Operator Configuring the Custom File Integrity Operator Performing advanced Custom File Integrity Operator tasks Troubleshooting the File Integrity Operator 6.2. File Integrity Operator release notes The File Integrity Operator for OpenShift Container Platform continually runs file integrity checks on RHCOS nodes. These release notes track the development of the File Integrity Operator in the OpenShift Container Platform. For an overview of the File Integrity Operator, see Understanding the File Integrity Operator . To access the latest release, see Updating the File Integrity Operator . 6.2.1. OpenShift File Integrity Operator 1.3.5 The following advisory is available for the OpenShift File Integrity Operator 1.3.5: RHBA-2024:10366 OpenShift File Integrity Operator Update This update includes upgraded dependencies in underlying base images. 6.2.2. OpenShift File Integrity Operator 1.3.4 The following advisory is available for the OpenShift File Integrity Operator 1.3.4: RHBA-2024:2946 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.2.1. Bug fixes Previously, File Integrity Operator would issue a NodeHasIntegrityFailure alert due to multus certificate rotation. With this release, the alert and failing status are now correctly triggered. ( OCPBUGS-31257 ) 6.2.3. OpenShift File Integrity Operator 1.3.3 The following advisory is available for the OpenShift File Integrity Operator 1.3.3: RHBA-2023:5652 OpenShift File Integrity Operator Bug Fix and Enhancement Update This update addresses a CVE in an underlying dependency. 6.2.3.1. New features and enhancements You can install and use the File Integrity Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see ( Installing the system in FIPS mode ) 6.2.3.2. Bug fixes Previously, some FIO pods with private default mount propagation in combination with hostPath: path: / volume mounts would break the CSI driver relying on multipath. This problem has been fixed and the CSI driver works correctly. ( Some OpenShift Operator pods blocking unmounting of CSI volumes when multipath is in use ) This update resolves CVE-2023-39325. ( CVE-2023-39325 ) 6.2.4. OpenShift File Integrity Operator 1.3.2 The following advisory is available for the OpenShift File Integrity Operator 1.3.2: RHBA-2023:5107 OpenShift File Integrity Operator Bug Fix Update This update addresses a CVE in an underlying dependency. 6.2.5. OpenShift File Integrity Operator 1.3.1 The following advisory is available for the OpenShift File Integrity Operator 1.3.1: RHBA-2023:3600 OpenShift File Integrity Operator Bug Fix Update 6.2.5.1. 
New features and enhancements FIO now includes kubelet certificates as default files, excluding them from issuing warnings when they're managed by OpenShift Container Platform. ( OCPBUGS-14348 ) FIO now correctly directs email to the address for Red Hat Technical Support. ( OCPBUGS-5023 ) 6.2.5.2. Bug fixes Previously, FIO would not clean up FileIntegrityNodeStatus CRDs when nodes are removed from the cluster. FIO has been updated to correctly clean up node status CRDs on node removal. ( OCPBUGS-4321 ) Previously, FIO would also erroneously indicate that new nodes failed integrity checks. FIO has been updated to correctly show node status CRDs when adding new nodes to the cluster. This provides correct node status notifications. ( OCPBUGS-8502 ) Previously, when FIO was reconciling FileIntegrity CRDs, it would pause scanning until the reconciliation was done. This caused an overly aggressive re-initialization process on nodes not impacted by the reconciliation. This problem also resulted in unnecessary daemon sets for machine config pools that are unrelated to the FileIntegrity object being changed. FIO correctly handles these cases and only pauses AIDE scanning for nodes that are affected by file integrity changes. ( CMP-1097 ) 6.2.5.3. Known Issues In FIO 1.3.1, increasing nodes in IBM Z(R) clusters might result in Failed File Integrity node status. For more information, see Adding nodes in IBM Power(R) clusters can result in failed File Integrity node status . 6.2.6. OpenShift File Integrity Operator 1.2.1 The following advisory is available for the OpenShift File Integrity Operator 1.2.1: RHBA-2023:1684 OpenShift File Integrity Operator Bug Fix Update This release includes updated container dependencies. 6.2.7. OpenShift File Integrity Operator 1.2.0 The following advisory is available for the OpenShift File Integrity Operator 1.2.0: RHBA-2023:1273 OpenShift File Integrity Operator Enhancement Update 6.2.7.1. New features and enhancements The File Integrity Operator Custom Resource (CR) now contains an initialDelay feature that specifies the number of seconds to wait before starting the first AIDE integrity check. For more information, see Creating the FileIntegrity custom resource . The File Integrity Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the File Integrity Operator . 6.2.8. OpenShift File Integrity Operator 1.0.0 The following advisory is available for the OpenShift File Integrity Operator 1.0.0: RHBA-2023:0037 OpenShift File Integrity Operator Bug Fix Update 6.2.9. OpenShift File Integrity Operator 0.1.32 The following advisory is available for the OpenShift File Integrity Operator 0.1.32: RHBA-2022:7095 OpenShift File Integrity Operator Bug Fix Update 6.2.9.1. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand from which namespace the alert originated. Now, the Operator sets the appropriate namespace, providing more information about the alert. ( BZ#2112394 ) Previously, the File Integrity Operator did not update the metrics service on Operator startup, causing the metrics targets to be unreachable. With this release, the File Integrity Operator now ensures the metrics service is updated on Operator startup. ( BZ#2115821 ) 6.2.10. 
OpenShift File Integrity Operator 0.1.30 The following advisory is available for the OpenShift File Integrity Operator 0.1.30: RHBA-2022:5538 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.10.1. New features and enhancements The File Integrity Operator is now supported on the following architectures: IBM Power(R) IBM Z(R) and IBM(R) LinuxONE 6.2.10.2. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand where the alert originated. Now, the Operator sets the appropriate namespace, increasing understanding of the alert. ( BZ#2101393 ) 6.2.11. OpenShift File Integrity Operator 0.1.24 The following advisory is available for the OpenShift File Integrity Operator 0.1.24: RHBA-2022:1331 OpenShift File Integrity Operator Bug Fix 6.2.11.1. New features and enhancements You can now configure the maximum number of backups stored in the FileIntegrity Custom Resource (CR) with the config.maxBackups attribute. This attribute specifies the number of AIDE database and log backups left over from the re-init process to keep on the node. Older backups beyond the configured number are automatically pruned. The default is set to five backups. 6.2.11.2. Bug fixes Previously, upgrading the Operator from versions older than 0.1.21 to 0.1.22 could cause the re-init feature to fail. This was a result of the Operator failing to update configMap resource labels. Now, upgrading to the latest version fixes the resource labels. ( BZ#2049206 ) Previously, when enforcing the default configMap script contents, the wrong data keys were compared. This resulted in the aide-reinit script not being updated properly after an Operator upgrade, and caused the re-init process to fail. Now, daemonSets run to completion and the AIDE database re-init process executes successfully. ( BZ#2072058 ) 6.2.12. OpenShift File Integrity Operator 0.1.22 The following advisory is available for the OpenShift File Integrity Operator 0.1.22: RHBA-2022:0142 OpenShift File Integrity Operator Bug Fix 6.2.12.1. Bug fixes Previously, a system with a File Integrity Operator installed might interrupt the OpenShift Container Platform update, due to the /etc/kubernetes/aide.reinit file. This occurred if the /etc/kubernetes/aide.reinit file was present, but later removed prior to the ostree validation. With this update, /etc/kubernetes/aide.reinit is moved to the /run directory so that it does not conflict with the OpenShift Container Platform update. ( BZ#2033311 ) 6.2.13. OpenShift File Integrity Operator 0.1.21 The following advisory is available for the OpenShift File Integrity Operator 0.1.21: RHBA-2021:4631 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.13.1. New features and enhancements The metrics related to FileIntegrity scan results and processing metrics are displayed on the monitoring dashboard on the web console. The results are labeled with the prefix of file_integrity_operator_ . If a node has an integrity failure for more than 1 second, the default PrometheusRule provided in the operator namespace alerts with a warning. 
The following dynamic Machine Config Operator and Cluster Version Operator related filepaths are excluded from the default AIDE policy to help prevent false positives during node updates: /etc/machine-config-daemon/currentconfig /etc/pki/ca-trust/extracted/java/cacerts /etc/cvo/updatepayloads /root/.kube The AIDE daemon process has stability improvements over v0.1.16, and is more resilient to errors that might occur when the AIDE database is initialized. 6.2.13.2. Bug fixes Previously, when the Operator automatically upgraded, outdated daemon sets were not removed. With this release, outdated daemon sets are removed during the automatic upgrade. 6.2.14. Additional resources Understanding the File Integrity Operator 6.3. File Integrity Operator support 6.3.1. File Integrity Operator lifecycle The File Integrity Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously of OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 6.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 6.4. Installing the File Integrity Operator 6.4.1. Installing the File Integrity Operator using the web console Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the File Integrity Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-file-integrity namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-file-integrity namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-file-integrity project that are reporting issues. 6.4.2. Installing the File Integrity Operator using the CLI Prerequisites You must have admin privileges. Procedure Create a Namespace object YAML file by running: USD oc create -f <file-name>.yaml Example output apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity 1 In OpenShift Container Platform 4.15, the pod security label must be set to privileged at the namespace level. 
Create the OperatorGroup object YAML file: USD oc create -f <file-name>.yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity Create the Subscription object YAML file: USD oc create -f <file-name>.yaml Example output apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: "stable" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-file-integrity Verify that the File Integrity Operator is up and running: USD oc get deploy -n openshift-file-integrity 6.4.3. Additional resources The File Integrity Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 6.5. Updating the File Integrity Operator As a cluster administrator, you can update the File Integrity Operator on your OpenShift Container Platform cluster. 6.5.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 6.5.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Update channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 
For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 6.5.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.6. Understanding the File Integrity Operator The File Integrity Operator is an OpenShift Container Platform Operator that continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged advanced intrusion detection environment (AIDE) containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods. Important Currently, only Red Hat Enterprise Linux CoreOS (RHCOS) nodes are supported. 6.6.1. Creating the FileIntegrity custom resource An instance of a FileIntegrity custom resource (CR) represents a set of continuous file integrity scans for one or more nodes. Each FileIntegrity CR is backed by a daemon set running AIDE on the nodes matching the FileIntegrity CR specification. Procedure Create the following example FileIntegrity CR named worker-fileintegrity.yaml to enable scans on worker nodes: Example FileIntegrity CR apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: "" tolerations: 2 - key: "myNode" operator: "Exists" effect: "NoSchedule" config: 3 name: "myconfig" namespace: "openshift-file-integrity" key: "config" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7 1 Defines the selector for scheduling node scans. 2 Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration allowing running on main and infra nodes is applied. 3 Define a ConfigMap containing an AIDE configuration to use. 4 The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node might be resource intensive, so it can be useful to specify a longer interval. Default is 900 seconds (15 minutes). 5 The maximum number of AIDE database and log backups (leftover from the re-init process) to keep on a node. Older backups beyond this number are automatically pruned by the daemon. Default is set to 5. 6 The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. 7 The running status of the FileIntegrity instance. Statuses are Initializing , Pending , or Active . 
Initializing The FileIntegrity object is currently initializing or re-initializing the AIDE database. Pending The FileIntegrity deployment is still being created. Active The scans are active and ongoing. Apply the YAML file to the openshift-file-integrity namespace: USD oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity Verification Confirm the FileIntegrity object was created successfully by running the following command: USD oc get fileintegrities -n openshift-file-integrity Example output NAME AGE worker-fileintegrity 14s 6.6.2. Checking the FileIntegrity custom resource status The FileIntegrity custom resource (CR) reports its status through the . status.phase subresource. Procedure To query the FileIntegrity CR status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status.phase }" Example output Active 6.6.3. FileIntegrity custom resource phases Pending - The phase after the custom resource (CR) is created. Active - The phase when the backing daemon set is up and running. Initializing - The phase when the AIDE database is being reinitialized. 6.6.4. Understanding the FileIntegrityNodeStatuses object The scan results of the FileIntegrity CR are reported in another object called FileIntegrityNodeStatuses . USD oc get fileintegritynodestatuses Example output NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s Note It might take some time for the FileIntegrityNodeStatus object results to be available. There is one result object per node. The nodeName attribute of each FileIntegrityNodeStatus object corresponds to the node being scanned. The status of the file integrity scan is represented in the results array, which holds scan conditions. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq The fileintegritynodestatus object reports the latest status of an AIDE run and exposes the status as Failed , Succeeded , or Errored in a status field. USD oc get fileintegritynodestatuses -w Example output NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded 6.6.5. 
FileIntegrityNodeStatus CR status types These conditions are reported in the results array of the corresponding FileIntegrityNodeStatus CR status: Succeeded - The integrity check passed; the files and directories covered by the AIDE check have not been modified since the database was last initialized. Failed - The integrity check failed; some files or directories covered by the AIDE check have been modified since the database was last initialized. Errored - The AIDE scanner encountered an internal error. 6.6.5.1. FileIntegrityNodeStatus CR success example Example output of a condition with a success status [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:57Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:46:03Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:48Z" } ] In this case, all three scans succeeded and so far there are no other conditions. 6.6.5.2. FileIntegrityNodeStatus CR failure status example To simulate a failure condition, modify one of the files AIDE tracks. For example, modify /etc/resolv.conf on one of the worker nodes: USD oc debug node/ip-10-0-130-192.ec2.internal Example output Creating debug namespace/openshift-debug-node-ldfbj ... Starting pod/ip-10-0-130-192ec2internal-debug ... To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo "# integrity test" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod ... Removing debug namespace/openshift-debug-node-ldfbj ... After some time, the Failed condition is reported in the results array of the corresponding FileIntegrityNodeStatus object. The Succeeded condition is retained, which allows you to pinpoint the time the check failed. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r Alternatively, if you are not mentioning the object name, run: USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq Example output [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:54:14Z" }, { "condition": "Failed", "filesChanged": 1, "lastProbeTime": "2020-09-15T12:57:20Z", "resultConfigMapName": "aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "resultConfigMapNamespace": "openshift-file-integrity" } ] The Failed condition points to a config map that gives more details about what exactly failed and why: USD oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Example output Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! 
Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none> Due to the config map data size limit, AIDE logs over 1 MB are added to the failure config map as a base64-encoded gzip archive. Use the following command to extract the log: USD oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip Note Compressed logs are indicated by the presence of a file-integrity.openshift.io/compressed annotation key in the config map. 6.6.6. Understanding events Transitions in the status of the FileIntegrity and FileIntegrityNodeStatus objects are logged by events . The creation time of the event reflects the latest transition, such as Initializing to Active , and not necessarily the latest scan result. However, the newest event always reflects the most recent status. USD oc get events --field-selector reason=FileIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active When a node scan fails, an event is created with the add/changed/removed and config map information. USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed Changes to the number of added, changed, or removed files results in a new event, even if the status of the node has not transitioned. 
USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 6.7. Configuring the Custom File Integrity Operator 6.7.1. Viewing FileIntegrity object attributes As with any Kubernetes custom resources (CRs), you can run oc explain fileintegrity , and then look at the individual attributes using: USD oc explain fileintegrity.spec USD oc explain fileintegrity.spec.config 6.7.2. Important attributes Table 6.1. Important spec and spec.config attributes Attribute Description spec.nodeSelector A map of key-value pairs that must match the node's labels in order for the AIDE pods to be schedulable on that node. The typical use is to set only a single key-value pair where node-role.kubernetes.io/worker: "" schedules AIDE on all worker nodes, node.openshift.io/os_id: "rhcos" schedules on all Red Hat Enterprise Linux CoreOS (RHCOS) nodes. spec.debug A boolean attribute. If set to true , the daemon running in the AIDE daemon set's pods outputs extra information. spec.tolerations Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration is applied, which allows the AIDE pods to run on control plane nodes. spec.config.gracePeriod The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node can be resource intensive, so it can be useful to specify a longer interval. Defaults to 900 , or 15 minutes. maxBackups The maximum number of AIDE database and log backups leftover from the re-init process to keep on a node. Older backups beyond this number are automatically pruned by the daemon. spec.config.name Name of a configMap that contains custom AIDE configuration. If omitted, a default configuration is created. spec.config.namespace Namespace of a configMap that contains custom AIDE configuration. If unset, the FIO generates a default configuration suitable for RHCOS systems. spec.config.key Key that contains actual AIDE configuration in a config map specified by name and namespace . The default value is aide.conf . spec.config.initialDelay The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. This attribute is optional. 6.7.3. Examine the default configuration The default File Integrity Operator configuration is stored in a config map with the same name as the FileIntegrity CR. 
Procedure To examine the default config, run: USD oc describe cm/worker-fileintegrity 6.7.4. Understanding the default File Integrity Operator configuration Below is an excerpt from the aide.conf key of the config map: @@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\..* PERMS /hostroot/root/ CONTENT_EX The default configuration for a FileIntegrity instance provides coverage for files under the following directories: /root /boot /usr /etc The following directories are not covered: /var /opt Some OpenShift Container Platform-specific excludes under /etc/ 6.7.5. Supplying a custom AIDE configuration Any entries that configure AIDE internal behavior such as DBDIR , LOGDIR , database , and database_out are overwritten by the Operator. The Operator would add a prefix to /hostroot/ before all paths to be watched for integrity changes. This makes reusing existing AIDE configs that might often not be tailored for a containerized environment and start from the root directory easier. Note /hostroot is the directory where the pods running AIDE mount the host's file system. Changing the configuration triggers a reinitializing of the database. 6.7.6. Defining a custom File Integrity Operator configuration This example focuses on defining a custom configuration for a scanner that runs on the control plane nodes based on the default configuration provided for the worker-fileintegrity CR. This workflow might be useful if you are planning to deploy a custom software running as a daemon set and storing its data under /opt/mydaemon on the control plane nodes. Procedure Make a copy of the default configuration. Edit the default configuration with the files that must be watched or excluded. Store the edited contents in a new config map. Point the FileIntegrity object to the new config map through the attributes in spec.config . Extract the default configuration: USD oc extract cm/worker-fileintegrity --keys=aide.conf This creates a file named aide.conf that you can edit. To illustrate how the Operator post-processes the paths, this example adds an exclude directory without the prefix: USD vim aide.conf Example output /hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db Exclude a path specific to control plane nodes: !/opt/mydaemon/ Store the other content in /etc : /hostroot/etc/ CONTENT_EX Create a config map based on this file: USD oc create cm master-aide-conf --from-file=aide.conf Define a FileIntegrity CR manifest that references the config map: apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: "" config: name: master-aide-conf namespace: openshift-file-integrity The Operator processes the provided config map file and stores the result in a config map with the same name as the FileIntegrity object: USD oc describe cm/master-fileintegrity | grep /opt/mydaemon Example output !/hostroot/opt/mydaemon 6.7.7. 
Changing the custom File Integrity configuration To change the File Integrity configuration, never change the generated config map. Instead, change the config map that is linked to the FileIntegrity object through the spec.name , namespace , and key attributes. 6.8. Performing advanced Custom File Integrity Operator tasks 6.8.1. Reinitializing the database If the File Integrity Operator detects a change that was planned, it might be required to reinitialize the database. Procedure Annotate the FileIntegrity custom resource (CR) with file-integrity.openshift.io/re-init : USD oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init= The old database and log files are backed up and a new database is initialized. The old database and logs are retained on the nodes under /etc/kubernetes , as seen in the following output from a pod spawned using oc debug : Example output ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55 To provide some permanence of record, the resulting config maps are not owned by the FileIntegrity object, so manual cleanup is necessary. As a result, any integrity failures would still be visible in the FileIntegrityNodeStatus object. 6.8.2. Machine config integration In OpenShift Container Platform 4, the cluster node configuration is delivered through MachineConfig objects. You can assume that the changes to files that are caused by a MachineConfig object are expected and should not cause the file integrity scan to fail. To suppress changes to files caused by MachineConfig object updates, the File Integrity Operator watches the node objects; when a node is being updated, the AIDE scans are suspended for the duration of the update. When the update finishes, the database is reinitialized and the scans resume. This pause and resume logic only applies to updates through the MachineConfig API, as they are reflected in the node object annotations. 6.8.3. Exploring the daemon sets Each FileIntegrity object represents a scan on a number of nodes. The scan itself is performed by pods managed by a daemon set. To find the daemon set that represents a FileIntegrity object, run: USD oc -n openshift-file-integrity get ds/aide-worker-fileintegrity To list the pods in that daemon set, run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity To view logs of a single AIDE pod, call oc logs on one of the pods. USD oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6 Example output Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check ... The config maps created by the AIDE daemon are not retained and are deleted after the File Integrity Operator processes them. However, on failure and error, the contents of these config maps are copied to the config map that the FileIntegrityNodeStatus object points to. 6.9. Troubleshooting the File Integrity Operator 6.9.1. General troubleshooting Issue You want to generally troubleshoot issues with the File Integrity Operator. 
Resolution Enable the debug flag in the FileIntegrity object. The debug flag increases the verbosity of the daemons that run in the DaemonSet pods and run the AIDE checks. 6.9.2. Checking the AIDE configuration Issue You want to check the AIDE configuration. Resolution The AIDE configuration is stored in a config map with the same name as the FileIntegrity object. All AIDE configuration config maps are labeled with file-integrity.openshift.io/aide-conf . 6.9.3. Determining the FileIntegrity object's phase Issue You want to determine if the FileIntegrity object exists and see its current status. Resolution To see the FileIntegrity object's current status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status }" Once the FileIntegrity object and the backing daemon set are created, the status should switch to Active . If it does not, check the Operator pod logs. 6.9.4. Determining that the daemon set's pods are running on the expected nodes Issue You want to confirm that the daemon set exists and that its pods are running on the nodes you expect them to run on. Resolution Run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity Note Adding -owide includes the IP address of the node that the pod is running on. To check the logs of the daemon pods, run oc logs . Check the return value of the AIDE command to see if the check passed or failed. | [
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity",
"oc create -f <file-name>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity",
"oc create -f <file-name>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc get csv -n openshift-file-integrity",
"oc get deploy -n openshift-file-integrity",
"apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7",
"oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity",
"oc get fileintegrities -n openshift-file-integrity",
"NAME AGE worker-fileintegrity 14s",
"oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"",
"Active",
"oc get fileintegritynodestatuses",
"NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq",
"oc get fileintegritynodestatuses -w",
"NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded",
"[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]",
"oc debug node/ip-10-0-130-192.ec2.internal",
"Creating debug namespace/openshift-debug-node-ldfbj Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq",
"[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]",
"oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed",
"Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>",
"oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip",
"oc get events --field-selector reason=FileIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active",
"oc get events --field-selector reason=NodeIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed",
"oc get events --field-selector reason=NodeIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed",
"oc explain fileintegrity.spec",
"oc explain fileintegrity.spec.config",
"oc describe cm/worker-fileintegrity",
"@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX",
"oc extract cm/worker-fileintegrity --keys=aide.conf",
"vim aide.conf",
"/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db",
"!/opt/mydaemon/",
"/hostroot/etc/ CONTENT_EX",
"oc create cm master-aide-conf --from-file=aide.conf",
"apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity",
"oc describe cm/master-fileintegrity | grep /opt/mydaemon",
"!/hostroot/opt/mydaemon",
"oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=",
"ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55",
"oc -n openshift-file-integrity get ds/aide-worker-fileintegrity",
"oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity",
"oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6",
"Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check",
"oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"",
"oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/file-integrity-operator |
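The FileIntegrityNodeStatus objects and the failed-result config maps shown above can be combined into a quick triage loop. The following is a minimal shell sketch, not taken from the document itself: the namespace matches the examples above, jq is assumed to be available, and the gunzip fallback covers the case where the integrity log is not compressed.
# List every result config map referenced by a Failed condition, then dump its AIDE log.
for cm in $(oc -n openshift-file-integrity get fileintegritynodestatuses -o json \
    | jq -r '.items[].results[]? | select(.condition=="Failed") | .resultConfigMapName' | sort -u); do
  echo "== ${cm} =="
  # Compressed logs must be base64-decoded and gunzipped, as shown earlier;
  # fall back to printing the raw data if the log is plain text.
  oc -n openshift-file-integrity get cm "${cm}" -o json | jq -r '.data.integritylog' | base64 -d | gunzip 2>/dev/null \
    || oc -n openshift-file-integrity get cm "${cm}" -o jsonpath='{.data.integritylog}'
done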
3.16. Host Networking Configurations | 3.16. Host Networking Configurations Common types of networking configurations for virtualization hosts include: Bridge and NIC configuration. This configuration uses a bridge to connect one or more virtual machines (or guests) to the host's NIC. An example of this configuration is the automatic creation of the ovirtmgmt network when installing Red Hat Virtualization Manager. Then, during host installation, the Red Hat Virtualization Manager installs VDSM on the host. The VDSM installation process creates the ovirtmgmt bridge which obtains the host's IP address to enable communication with the Manager. Important Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only. Dual stack is not supported. Bridge, VLAN, and NIC configuration. A VLAN can be included in the bridge and NIC configuration to provide a secure channel for data transfer over the network and also to support the option to connect multiple bridges to a single NIC using multiple VLANs. Bridge, Bond, and VLAN configuration. A bond creates a logical link that combines the two (or more) physical Ethernet links. The resultant benefits include NIC fault tolerance and potential bandwidth extension, depending on the bonding mode. Multiple Bridge, Multiple VLAN, and NIC configuration. This configuration connects a NIC to multiple VLANs. For example, to connect a single NIC to two VLANs, the network switch can be configured to pass network traffic that has been tagged into one of the two VLANs to one NIC on the host. The host uses two VNICs to separate VLAN traffic, one for each VLAN. Traffic tagged into either VLAN then connects to a separate bridge by having the appropriate VNIC as a bridge member. Each bridge, in turn, connects to multiple virtual machines. Note You can also bond multiple NICs to facilitate a connection with multiple VLANs. Each VLAN in this configuration is defined over the bond comprising the multiple NICs. Each VLAN connects to an individual bridge and each bridge connects to one or more guests. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/host_networking_configurations |
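VDSM normally builds these host networks for you, but a hand-rolled equivalent helps make the layering concrete. Below is a minimal nmcli sketch, not taken from this guide, of the bridge, bond, and VLAN configuration described above; the interface names eno1 and eno2, VLAN ID 10, and bridge name br10 are illustrative assumptions to adjust for your environment.
# Bridge that the virtual machines will attach to (no IP address of its own here).
nmcli connection add type bridge con-name br10 ifname br10 ipv4.method disabled ipv6.method ignore
# Bond two physical NICs for fault tolerance (active-backup mode).
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0
# VLAN 10 defined over the bond, enslaved to the bridge.
nmcli connection add type vlan con-name bond0.10 dev bond0 id 10 master br10 slave-type bridge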
Chapter 9. Creating a Keycloak realm for Ansible Automation Platform | Chapter 9. Creating a Keycloak realm for Ansible Automation Platform Create a realm to manage a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control. Procedure Navigate to Operators → Installed Operators . Select the Red Hat Single Sign-On Operator project. Select the Keycloak Realm tab and click Create Keycloak Realm . On the Keycloak Realm form, select YAML view . Edit the YAML file as follows: kind: KeycloakRealm apiVersion: keycloak.org/v1alpha1 metadata: name: ansible-automation-platform-keycloakrealm namespace: rh-sso labels: app: sso realm: ansible-automation-platform spec: realm: id: ansible-automation-platform realm: ansible-automation-platform enabled: true displayName: Ansible Automation Platform instanceSelector: matchLabels: app: sso Field Description metadata.name Set a unique value in metadata for the name of the configuration resource (CR). metadata.namespace Set a unique value in metadata for the namespace of the configuration resource (CR). metadata.labels.app Set labels to a unique value. This is used when creating the client CR. metadata.labels.realm Set labels to a unique value. This is used when creating the client CR. spec.realm.id Set the realm name and id. These must be the same. spec.realm.realm Set the realm name and id. These must be the same. spec.realm.displayName Set the name to display. Click Create and wait for the process to complete. | [
"kind: KeycloakRealm apiVersion: keycloak.org/v1alpha1 metadata: name: ansible-automation-platform-keycloakrealm namespace: rh-sso labels: app: sso realm: ansible-automation-platform spec: realm: id: ansible-automation-platform realm: ansible-automation-platform enabled: true displayName: Ansible Automation Platform instanceSelector: matchLabels: app: sso"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/proc-create-keycloak-realm_using-a-rhsso-operator |
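If you prefer the command line over the web console, the same KeycloakRealm resource can be applied with oc. A small sketch under stated assumptions: the YAML above is saved as keycloak-realm.yml, you are logged in to the cluster, and the operator's status fields may vary by version.
# Create the realm CR in the rh-sso namespace and confirm that it exists.
oc apply -f keycloak-realm.yml
oc get keycloakrealm ansible-automation-platform-keycloakrealm -n rh-sso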
Chapter 8. DNS [config.openshift.io/v1] | Chapter 8. DNS [config.openshift.io/v1] Description DNS holds cluster-wide information about DNS. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 8.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description baseDomain string baseDomain is the base domain of the cluster. All managed DNS records will be sub-domains of this base. For example, given the base domain openshift.example.com , an API server DNS record may be created for cluster-api.openshift.example.com . Once set, this field cannot be changed. platform object platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. privateZone object privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. publicZone object publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. 8.1.2. .spec.platform Description platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Required type Property Type Description aws object aws contains DNS configuration specific to the Amazon Web Services cloud provider. type string type is the underlying infrastructure provider for the cluster. Allowed values: "", "AWS". Individual components may not support all platforms, and must handle unrecognized platforms with best-effort defaults. 8.1.3. .spec.platform.aws Description aws contains DNS configuration specific to the Amazon Web Services cloud provider. Type object Property Type Description privateZoneIAMRole string privateZoneIAMRole contains the ARN of an IAM role that should be assumed when performing operations on the cluster's private hosted zone specified in the cluster DNS config. When left empty, no role should be assumed. 8.1.4. 
.spec.privateZone Description privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.5. .spec.publicZone Description publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object 8.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/dnses DELETE : delete collection of DNS GET : list objects of kind DNS POST : create a DNS /apis/config.openshift.io/v1/dnses/{name} DELETE : delete a DNS GET : read the specified DNS PATCH : partially update the specified DNS PUT : replace the specified DNS /apis/config.openshift.io/v1/dnses/{name}/status GET : read status of the specified DNS PATCH : partially update status of the specified DNS PUT : replace status of the specified DNS 8.2.1. /apis/config.openshift.io/v1/dnses HTTP method DELETE Description delete collection of DNS Table 8.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNS Table 8.2. HTTP responses HTTP code Response body 200 - OK DNSList schema 401 - Unauthorized Empty HTTP method POST Description create a DNS Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body DNS schema Table 8.5. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 202 - Accepted DNS schema 401 - Unauthorized Empty 8.2.2. /apis/config.openshift.io/v1/dnses/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the DNS HTTP method DELETE Description delete a DNS Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNS Table 8.9. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNS Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNS Table 8.12.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body DNS schema Table 8.14. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty 8.2.3. /apis/config.openshift.io/v1/dnses/{name}/status Table 8.15. Global path parameters Parameter Type Description name string name of the DNS HTTP method GET Description read status of the specified DNS Table 8.16. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNS Table 8.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.18. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNS Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body DNS schema Table 8.21. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/dns-config-openshift-io-v1
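In day-to-day use these endpoints are exercised through oc rather than raw HTTP. A brief sketch, not part of the API reference itself; it relies only on the fact that the resource is cluster-scoped and canonically named cluster.
# Read the cluster's base domain from the singleton DNS config object.
oc get dns.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}{"\n"}'
# The same data via the GET /apis/config.openshift.io/v1/dnses/{name} endpoint.
oc get --raw /apis/config.openshift.io/v1/dnses/cluster | jq .spec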
Appendix B. Contact information | Appendix B. Contact information Red Hat Decision Manager documentation team: [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/author-group |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/making-open-source-more-inclusive_2.4.57-release-notes |
Chapter 5. KVM Paravirtualized (virtio) Drivers | Chapter 5. KVM Paravirtualized (virtio) Drivers Paravirtualized drivers enhance the performance of guests, decreasing guest I/O latency and increasing throughput almost to bare-metal levels. It is recommended to use the paravirtualized drivers for fully virtualized guests running I/O-heavy tasks and applications. Virtio drivers are KVM's paravirtualized device drivers, available for guest virtual machines running on KVM hosts. These drivers are included in the virtio package. The virtio package supports block (storage) devices and network interface controllers. Note PCI devices are limited by the virtualized system architecture. See Chapter 16, Guest Virtual Machine Device Configuration for additional limitations when using assigned devices. 5.1. Using KVM virtio Drivers for Existing Storage Devices You can modify an existing hard disk device attached to the guest to use the virtio driver instead of the virtualized IDE driver. The example shown in this section edits libvirt configuration files. Note that the guest virtual machine does not need to be shut down to perform these steps; however, the change will not be applied until the guest is completely shut down and rebooted. Procedure 5.1. Using KVM virtio drivers for existing devices Ensure that you have installed the appropriate driver ( viostor ) before continuing with this procedure. Run the virsh edit guestname command as root to edit the XML configuration file for your device. For example, virsh edit guest1 . The configuration files are located in the /etc/libvirt/qemu/ directory. Below is a file-based block device using the virtualized IDE driver. This is a typical entry for a virtual machine not using the virtio drivers. Change the entry to use the virtio device by modifying the bus= entry to virtio . Note that if the disk was previously IDE, it has a target similar to hda , hdb , or hdc . When changing to bus=virtio the target needs to be changed to vda , vdb , or vdc accordingly. Remove the address tag inside the disk tags. This must be done for this procedure to work. Libvirt will regenerate the address tag appropriately the next time the virtual machine is started. Alternatively, virt-manager , virsh attach-disk or virsh attach-interface can add a new device using the virtio drivers. See the libvirt website for more details on using Virtio: http://www.linux-kvm.org/page/Virtio | [
"<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='hda' bus='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk>",
"<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-KVM_Para_virtualized_virtio_Drivers |
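As the procedure notes, virsh attach-disk can add a new virtio device without hand-editing the XML. A one-line sketch; the guest name guest1, the image path, and the vdb target are assumptions for illustration only.
# Attach a second disk on the virtio bus; --persistent also updates the stored XML definition.
virsh attach-disk guest1 /var/lib/libvirt/images/disk2.img vdb --targetbus virtio --persistent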
Chapter 5. Advisories related to this release | Chapter 5. Advisories related to this release The following advisories have been issued to bugfixes and CVE fixes included in this release. RHEA-2021:3699-02 . RHEA-2021:3023-01 . RHEA-2021:3863-03 . Revised on 2024-05-09 16:46:38 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.13/rn-openjdk11013-advisory |
Chapter 5. Identity Brokering APIs | Chapter 5. Identity Brokering APIs Red Hat Single Sign-On can delegate authentication to a parent IDP for login. A typical example of this is the case where you want users to be able to log in through a social provider such as Facebook or Google. You can also link existing accounts to a brokered IDP. This section describes some APIs that your applications can use as they pertain to identity brokering. 5.1. Retrieving external IDP tokens Red Hat Single Sign-On allows you to store tokens and responses from the authentication process with the external IDP. For that, you can use the Store Token configuration option on the IDP's settings page. Application code can retrieve these tokens and responses to pull in extra user information, or to securely invoke requests on the external IDP. For example, an application might want to use the Google token to invoke on other Google services and REST APIs. To retrieve a token for a particular identity provider you need to send a request as follows: GET /auth/realms/{realm}/broker/{provider_alias}/token HTTP/1.1 Host: localhost:8080 Authorization: Bearer <KEYCLOAK ACCESS TOKEN> An application must have authenticated with Red Hat Single Sign-On and have received an access token. This access token will need to have the broker client-level role read-token set. This means that the user must have a role mapping for this role and the client application must have that role within its scope. In this case, given that you are accessing a protected service in Red Hat Single Sign-On, you need to send the access token issued by Red Hat Single Sign-On during the user authentication. In the broker configuration page you can automatically assign this role to newly imported users by turning on the Stored Tokens Readable switch. These external tokens can be re-established by either logging in again through the provider, or using the client initiated account linking API. 5.2. Client initiated account linking Some applications want to integrate with social providers like Facebook, but do not want to provide an option to log in via these social providers. Red Hat Single Sign-On offers a browser-based API that applications can use to link an existing user account to a specific external IDP. This is called client-initiated account linking. Account linking can only be initiated by OIDC applications. The way it works is that the application forwards the user's browser to a URL on the Red Hat Single Sign-On server requesting that it wants to link the user's account to a specific external provider (i.e. Facebook). The server initiates a login with the external provider. The browser logs in at the external provider and is redirected back to the server. The server establishes the link and redirects back to the application with a confirmation. There are some preconditions that must be met by the client application before it can initiate this protocol: The desired identity provider must be configured and enabled for the user's realm in the admin console. The user account must already be logged in as an existing user via the OIDC protocol. The user must have an account.manage-account or account.manage-account-links role mapping. The application must be granted the scope for those roles within its access token. The application must have access to its access token as it needs information within it to generate the redirect URL. To initiate the login, the application must fabricate a URL and redirect the user's browser to this URL.
The URL looks like this: /{auth-server-root}/auth/realms/{realm}/broker/{provider}/link?client_id={id}&redirect_uri={uri}&nonce={nonce}&hash={hash} Here's a description of each path and query param: provider This is the provider alias of the external IDP that you defined in the Identity Provider section of the admin console. client_id This is the OIDC client id of your application. When you registered the application as a client in the admin console, you had to specify this client id. redirect_uri This is the application callback URL you want to redirect to after the account link is established. It must be a valid client redirect URI pattern. In other words, it must match one of the valid URL patterns you defined when you registered the client in the admin console. nonce This is a random string that your application must generate. hash This is a Base64 URL encoded hash. This hash is generated by Base64 URL encoding a SHA_256 hash of nonce + token.getSessionState() + token.getIssuedFor() + provider . The token variables are obtained from the OIDC access token. Basically you are hashing the random nonce, the user session id, the client id, and the identity provider alias you want to access. Here's an example of Java Servlet code that generates the URL to establish the account link. KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); AccessToken token = session.getToken(); String clientId = token.getIssuedFor(); String nonce = UUID.randomUUID().toString(); MessageDigest md = null; try { md = MessageDigest.getInstance("SHA-256"); } catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } String input = nonce + token.getSessionState() + clientId + provider; byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8)); String hash = Base64Url.encode(check); request.getSession().setAttribute("hash", hash); String redirectUri = ...; String accountLinkUrl = KeycloakUriBuilder.fromUri(authServerRootUrl) .path("/auth/realms/{realm}/broker/{provider}/link") .queryParam("nonce", nonce) .queryParam("hash", hash) .queryParam("client_id", clientId) .queryParam("redirect_uri", redirectUri).build(realm, provider).toString(); Why is this hash included? We do this so that the auth server is guaranteed to know that the client application initiated the request and no other rogue app just randomly asked for a user account to be linked to a specific provider. The auth server will first check to see if the user is logged in by checking the SSO cookie set at login. It will then try to regenerate the hash based on the current login and match it up to the hash sent by the application. After the account has been linked, the auth server will redirect back to the redirect_uri . If there is a problem servicing the link request, the auth server may or may not redirect back to the redirect_uri . The browser may just end up at an error page instead of being redirected back to the application. If there is an error condition and the auth server deems it safe enough to redirect back to the client app, an additional error query parameter will be appended to the redirect_uri . Warning While this API guarantees that the application initiated the request, it does not completely prevent CSRF attacks for this operation. The application is still responsible for guarding against CSRF attacks targeted at itself. 5.2.1. Refreshing external tokens If you are using the external token generated by logging into the provider (i.e. a Facebook or GitHub token), you can refresh this token by re-initiating the account linking API. | [
"GET /auth/realms/{realm}/broker/{provider_alias}/token HTTP/1.1 Host: localhost:8080 Authorization: Bearer <KEYCLOAK ACCESS TOKEN>",
"/{auth-server-root}/auth/realms/{realm}/broker/{provider}/link?client_id={id}&redirect_uri={uri}&nonce={nonce}&hash={hash}",
"KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); AccessToken token = session.getToken(); String clientId = token.getIssuedFor(); String nonce = UUID.randomUUID().toString(); MessageDigest md = null; try { md = MessageDigest.getInstance(\"SHA-256\"); } catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } String input = nonce + token.getSessionState() + clientId + provider; byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8)); String hash = Base64Url.encode(check); request.getSession().setAttribute(\"hash\", hash); String redirectUri = ...; String accountLinkUrl = KeycloakUriBuilder.fromUri(authServerRootUrl) .path(\"/auth/realms/{realm}/broker/{provider}/link\") .queryParam(\"nonce\", nonce) .queryParam(\"hash\", hash) .queryParam(\"client_id\", clientId) .queryParam(\"redirect_uri\", redirectUri).build(realm, provider).toString();"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_developer_guide/identity_brokering_apis |
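A quick way to exercise the token retrieval endpoint described in Section 5.1 is with curl. A minimal sketch in which the realm myrealm, the provider alias facebook, and the ACCESS_TOKEN variable are placeholders; the token must carry the broker read-token role as explained above.
# Retrieve the stored external IDP token for the authenticated user.
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://localhost:8080/auth/realms/myrealm/broker/facebook/token"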
Chapter 19. Configuring NTP Using ntpd | Chapter 19. Configuring NTP Using ntpd 19.1. Introduction to NTP The Network Time Protocol ( NTP ) enables the accurate dissemination of time and date information in order to keep the time clocks on networked computer systems synchronized to a common reference over the network or the Internet. Many standards bodies around the world have atomic clocks which may be made available as a reference. The satellites that make up the Global Positioning System contain more than one atomic clock, making their time signals potentially very accurate. Their signals can be deliberately degraded for military reasons. An ideal situation would be where each site has a server, with its own reference clock attached, to act as a site-wide time server. Many devices which obtain the time and date via low frequency radio transmissions or the Global Positioning System (GPS) exist. However, for most situations, a range of publicly accessible time servers connected to the Internet at geographically dispersed locations can be used. These NTP servers provide " Coordinated Universal Time " ( UTC ). Information about these time servers can be found at www.pool.ntp.org . Accurate time keeping is important for a number of reasons in IT. In networking for example, accurate time stamps in packets and logs are required. Logs are used to investigate service and security issues and so time stamps made on different systems must be made by synchronized clocks to be of real value. As systems and networks become increasingly faster, there is a corresponding need for clocks with greater accuracy and resolution. In some countries there are legal obligations to keep accurately synchronized clocks. See www.ntp.org for more information. In Linux systems, NTP is implemented by a daemon running in user space. The default NTP user space daemon in Red Hat Enterprise Linux 7 is chronyd . It must be disabled if you want to use the ntpd daemon. See Chapter 18, Configuring NTP Using the chrony Suite for information on chrony . The user space daemon updates the system clock, which is a software clock running in the kernel. Linux uses a software clock as its system clock for better resolution than the typical embedded hardware clock referred to as the " Real Time Clock " (RTC) . See the rtc(4) and hwclock(8) man pages for information on hardware clocks. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter ( TSC ) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interrupts. On system start, the system clock reads the time and date from the RTC. The time kept by the RTC will drift away from actual time by up to 5 minutes per month due to temperature variations. Hence the need for the system clock to be constantly synchronized with external time references. When the system clock is being synchronized by ntpd , the kernel will in turn update the RTC every 11 minutes automatically. 19.2. NTP Strata NTP servers are classified according to their synchronization distance from the atomic clocks which are the source of the time signals. The servers are thought of as being arranged in layers, or strata, from 1 at the top down to 15. Hence the word stratum is used when referring to a specific layer.
Atomic clocks are referred to as Stratum 0 as this is the source, but no Stratum 0 packet is sent on the Internet, all stratum 0 atomic clocks are attached to a server which is referred to as stratum 1. These servers send out packets marked as Stratum 1. A server which is synchronized by means of packets marked stratum n belongs to the next, lower, stratum and will mark its packets as stratum n+1 . Servers of the same stratum can exchange packets with each other but are still designated as belonging to just the one stratum, the stratum one below the best reference they are synchronized to. The designation Stratum 16 is used to indicate that the server is not currently synchronized to a reliable time source. Note that by default NTP clients act as servers for those systems in the stratum below them. Here is a summary of the NTP Strata: Stratum 0 Atomic Clocks and their signals broadcast over Radio and GPS GPS (Global Positioning System) Mobile Phone Systems Low Frequency Radio Broadcasts WWVB (Colorado, USA.), JJY-40 and JJY-60 (Japan), DCF77 (Germany), and MSF (United Kingdom) These signals can be received by dedicated devices and are usually connected by RS-232 to a system used as an organizational or site-wide time server. Stratum 1 Computer with radio clock, GPS clock, or atomic clock attached Stratum 2 Reads from stratum 1; Serves to lower strata Stratum 3 Reads from stratum 2; Serves to lower strata Stratum n+1 Reads from stratum n ; Serves to lower strata Stratum 15 Reads from stratum 14; This is the lowest stratum. This process continues down to Stratum 15 which is the lowest valid stratum. The label Stratum 16 is used to indicate an unsynchronized state. 19.3. Understanding NTP The version of NTP used by Red Hat Enterprise Linux is as described in RFC 1305 Network Time Protocol (Version 3) Specification, Implementation and Analysis and RFC 5905 Network Time Protocol Version 4: Protocol and Algorithms Specification . This implementation of NTP enables sub-second accuracy to be achieved. Over the Internet, accuracy to 10s of milliseconds is normal. On a Local Area Network (LAN), 1 ms accuracy is possible under ideal conditions. This is because clock drift is now accounted for and corrected, which was not done in earlier, simpler, time protocol systems. A resolution of 233 picoseconds is provided by using 64-bit time stamps. The first 32 bits of the time stamp are used for seconds, the last 32 bits are used for fractions of seconds. NTP represents the time as a count of the number of seconds since 00:00 (midnight) 1 January, 1900 GMT. As 32 bits are used to count the seconds, this means the time will "roll over" in 2036. However NTP works on the difference between time stamps so this does not present the same level of problem as other implementations of time protocols have done. If a hardware clock that is within 68 years of the correct time is available at boot time then NTP will correctly interpret the current date. The NTP4 specification provides for an "Era Number" and an "Era Offset" which can be used to make software more robust when dealing with time lengths of more than 68 years. Do not confuse this with the Unix Year 2038 problem. The NTP protocol provides additional information to improve accuracy. Four time stamps are used to allow the calculation of round-trip time and server response time. In order for a system in its role as NTP client to synchronize with a reference time server, a packet is sent with an "originate time stamp".
When the packet arrives, the time server adds a "receive time stamp". After processing the request for time and date information and just before returning the packet, it adds a "transmit time stamp". When the returning packet arrives at the NTP client, a "receive time stamp" is generated. The client can now calculate the total round trip time and by subtracting the processing time derive the actual traveling time. By assuming the outgoing and return trips take equal time, the single-trip delay in receiving the NTP data is calculated. The full NTP algorithm is much more complex than presented here. When a packet containing time information is received it is not immediately responded to, but is first subject to validation checks and then processed together with several other time samples to arrive at an estimate of the time. This is then compared to the system clock to determine the time offset, the difference between the system clock's time and what ntpd has determined the time should be. The system clock is adjusted slowly, at most at a rate of 0.5 ms per second, to reduce this offset by changing the frequency of the counter being used. It will take at least 2000 seconds to adjust the clock by 1 second using this method. This slow change is referred to as slewing and cannot go backwards. If the time offset of the clock is more than 128 ms (the default setting), ntpd can "step" the clock forwards or backwards. If the time offset at system start is greater than 1000 seconds then the user, or an installation script, should make a manual adjustment. See Chapter 3, Configuring the Date and Time . With the -g option to the ntpd command (used by default), any offset at system start will be corrected, but during normal operation only offsets of up to 1000 seconds will be corrected. Some software may fail or produce an error if the time is changed backwards. For systems that are sensitive to step changes in the time, the threshold can be changed to 600 s instead of 128 ms using the -x option (unrelated to the -g option). Using the -x option to increase the stepping limit from 0.128 s to 600 s has a drawback because a different method of controlling the clock has to be used. It disables the kernel clock discipline and may have a negative impact on the clock accuracy. The -x option can be added to the /etc/sysconfig/ntpd configuration file. 19.4. Understanding the Drift File The drift file is used to store the frequency offset between the system clock running at its nominal frequency and the frequency required to remain in synchronization with UTC. If present, the value contained in the drift file is read at system start and used to correct the clock source. Use of the drift file reduces the time required to achieve a stable and accurate time. The value is calculated, and the drift file replaced, once per hour by ntpd . The drift file is replaced, rather than just updated, and for this reason the drift file must be in a directory for which the ntpd has write permissions. 19.5. UTC, Timezones, and DST As NTP is entirely in UTC (Universal Time, Coordinated), Timezones and DST (Daylight Saving Time) are applied locally by the system. The file /etc/localtime is a copy of, or symlink to, a zone information file from /usr/share/zoneinfo . The RTC may be in localtime or in UTC, as specified by the 3rd line of /etc/adjtime , which will be one of LOCAL or UTC to indicate how the RTC clock has been set. 
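Both records can be checked on a running system; a small sketch, not part of the original text:
# Report whether the RTC is interpreted as local time or UTC.
timedatectl | grep -i 'RTC in local TZ'
# The third line of /etc/adjtime holds the same information (LOCAL or UTC).
sed -n '3p' /etc/adjtime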
Users can easily change this setting using the checkbox System Clock Uses UTC in the Date and Time graphical configuration tool. See Chapter 3, Configuring the Date and Time for information on how to use that tool. Running the RTC in UTC is recommended to avoid various problems when daylight saving time is changed. The operation of ntpd is explained in more detail in the man page ntpd(8) . The resources section lists useful sources of information. See Section 19.20, "Additional Resources" . 19.6. Authentication Options for NTP NTPv4 NTPv4 added support for the Autokey Security Architecture, which is based on public asymmetric cryptography while retaining support for symmetric key cryptography. The Autokey protocol is described in RFC 5906 Network Time Protocol Version 4: Autokey Specification . Unfortunately, it was found later that the protocol has serious security issues, and thus Red Hat strongly recommends using symmetric keys instead. The man page ntp_auth(5) describes the authentication options and commands for ntpd . An attacker on the network can attempt to disrupt a service by sending NTP packets with incorrect time information. On systems using the public pool of NTP servers, this risk is mitigated by having more than three NTP servers in the list of public NTP servers in /etc/ntp.conf . If only one time source is compromised or spoofed, ntpd will ignore that source. You should conduct a risk assessment and consider the impact of incorrect time on your applications and organization. If you have internal time sources you should consider steps to protect the network over which the NTP packets are distributed. If you conduct a risk assessment and conclude that the risk is acceptable, and the impact to your applications minimal, then you can choose not to use authentication. The broadcast and multicast modes require authentication by default. If you have decided to trust the network then you can disable authentication by using the disable auth directive in the ntp.conf file. Alternatively, authentication needs to be configured by using SHA1 or MD5 symmetric keys, or by public (asymmetric) key cryptography using the Autokey scheme. The Autokey scheme for asymmetric cryptography is explained in the ntp_auth(8) man page and the generation of keys is explained in ntp-keygen(8) . To implement symmetric key cryptography, see Section 19.17.12, "Configuring Symmetric Authentication Using a Key" for an explanation of the key option. 19.7. Managing the Time on Virtual Machines Virtual machines cannot access a real hardware clock and a virtual clock is not stable enough as the stability is dependent on the host system's work load. For this reason, para-virtualized clocks should be provided by the virtualization application in use. On Red Hat Enterprise Linux with KVM the default clock source is kvm-clock . See the KVM guest timing management chapter of the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 19.8. Understanding Leap Seconds Greenwich Mean Time (GMT) was derived by measuring the solar day, which is dependent on the Earth's rotation. When atomic clocks were first made, the potential for more accurate definitions of time became possible. In 1958, International Atomic Time (TAI) was introduced based on the more accurate and very stable atomic clocks. A more accurate astronomical time, Universal Time 1 (UT1), was also introduced to replace GMT.
The atomic clocks are in fact far more stable than the rotation of the Earth and so the two times began to drift apart. For this reason UTC was introduced as a practical measure. It is kept within one second of UT1 but to avoid making many small trivial adjustments it was decided to introduce the concept of a leap second in order to reconcile the difference in a manageable way. The difference between UT1 and UTC is monitored until they drift apart by more than half a second. Then only is it deemed necessary to introduce a one second adjustment, forward or backward. Due to the erratic nature of the Earth's rotational speed, the need for an adjustment cannot be predicted far into the future. The decision as to when to make an adjustment is made by the International Earth Rotation and Reference Systems Service (IERS) . However, these announcements are important only to administrators of Stratum 1 servers because NTP transmits information about pending leap seconds and applies them automatically. 19.9. Understanding the ntpd Configuration File The daemon, ntpd , reads the configuration file at system start or when the service is restarted. The default location for the file is /etc/ntp.conf and you can view the file by entering the following command: less /etc/ntp.conf The configuration commands are explained briefly later in this chapter, see Section 19.17, "Configure NTP" , and more verbosely in the ntp.conf(5) man page. Here follows a brief explanation of the contents of the default configuration file: The driftfile entry A path to the drift file is specified, the default entry on Red Hat Enterprise Linux is: driftfile /var/lib/ntp/drift If you change this, be certain that the directory is writable by ntpd . The file contains one value used to adjust the system clock frequency after every system or service start. See Understanding the Drift File for more information. The access control entries The following line sets the default access control restriction: restrict default nomodify notrap nopeer noquery The nomodify option prevents any changes to the configuration. The notrap option prevents ntpdc control message protocol traps. The nopeer option prevents a peer association being formed. The noquery option prevents ntpq and ntpdc queries, but not time queries, from being answered. Important The ntpq and ntpdc queries can be used in amplification attacks, therefore do not remove the noquery option from the restrict default command on publicly accessible systems. See CVE-2013-5211 for more details. Addresses within the range 127.0.0.0/8 are sometimes required by various processes or applications. As the "restrict default" line above prevents access to everything not explicitly allowed, access to the standard loopback address for IPv4 and IPv6 is permitted by means of the following lines: restrict 127.0.0.1 restrict ::1 Addresses can be added underneath if specifically required by another application. Hosts on the local network are not permitted because of the "restrict default" line above. To change this, for example to allow hosts from the 192.0.2.0/24 network to query the time and statistics but nothing more, a line in the following format is required: restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap To allow unrestricted access from a specific host, for example 192.0.2.250/32 , a line in the following format is required: restrict 192.0.2.250 A mask of 255.255.255.255 is applied if none is specified. The restrict commands are explained in the ntp_acc(5) man page. The public servers entry By default, the ntp.conf file contains four public server entries: server 0.rhel.pool.ntp.org iburst server 1.rhel.pool.ntp.org iburst server 2.rhel.pool.ntp.org iburst server 3.rhel.pool.ntp.org iburst The broadcast multicast servers entry By default, the ntp.conf file contains some commented out examples.
These are largely self-explanatory. See Section 19.17, "Configure NTP" for the explanation of the specific commands. If required, add your commands just below the examples. Note When the DHCP client program, dhclient , receives a list of NTP servers from the DHCP server, it adds them to ntp.conf and restarts the service. To disable that feature, add PEERNTP=no to /etc/sysconfig/network . 19.10. Understanding the ntpd Sysconfig File The file will be read by the ntpd init script on service start. The default contents are as follows: OPTIONS="-g" The -g option enables ntpd to ignore the offset limit of 1000 s and attempt to synchronize the time even if the offset is larger than 1000 s, but only on system start. Without that option ntpd will exit if the time offset is greater than 1000 s. It will also exit after system start if the service is restarted and the offset is greater than 1000 s even with the -g option. 19.11. Disabling chrony In order to use ntpd the default user space daemon, chronyd , must be stopped and disabled. Issue the following command as root : systemctl stop chronyd To prevent it restarting at system start, issue the following command as root : systemctl disable chronyd To check the status of chronyd , issue the following command: systemctl status chronyd 19.12. Checking if the NTP Daemon is Installed To check if ntpd is installed, enter the following command as root : yum install ntp NTP is implemented by means of the daemon or service ntpd , which is contained within the ntp package. 19.13. Installing the NTP Daemon (ntpd) To install ntpd , enter the following command as root : yum install ntp To enable ntpd at system start, enter the following command as root : systemctl enable ntpd 19.14. Checking the Status of NTP To check if ntpd is running and configured to run at system start, issue the following command: systemctl status ntpd To obtain a brief status report from ntpd , issue the following command: ntpstat 19.15. Configure the Firewall to Allow Incoming NTP Packets The NTP traffic consists of UDP packets on port 123 and needs to be permitted through network and host-based firewalls in order for NTP to function. Check if the firewall is configured to allow incoming NTP traffic for clients using the graphical Firewall Configuration tool. To start the graphical firewall-config tool, press the Super key to enter the Activities Overview, type firewall and then press Enter . The Firewall Configuration window opens. You will be prompted for your user password. To start the graphical firewall configuration tool using the command line, enter the following command as root user: firewall-config The Firewall Configuration window opens. Note, this command can be run as a normal user but you will then be prompted for the root password from time to time. Look for the word "Connected" in the lower left corner. This indicates that the firewall-config tool is connected to the user space daemon, firewalld . 19.15.1. Change the Firewall Settings To immediately change the current firewall settings, ensure the drop-down selection menu labeled Configuration is set to Runtime . Alternatively, to edit the settings to be applied at the system start, or firewall reload, select Permanent from the drop-down list. Note When making changes to the firewall settings in Runtime mode, your selection takes immediate effect when you set or clear the check box associated with the service. You should keep this in mind when working on a system that may be in use by other users. When making changes to the firewall settings in Permanent mode, your selection will only take effect when you reload the firewall or the system restarts.
To reload the firewall, select the Options menu and select Reload Firewall . 19.15.2. Open Ports in the Firewall for NTP Packets To permit traffic through the firewall to a certain port, start the firewall-config tool and select the network zone whose settings you want to change. Select the Ports tab and then click the Add button. The Port and Protocol window opens. Enter the port number 123 and select udp from the drop-down list. 19.16. Configure ntpdate Servers The purpose of the ntpdate service is to set the clock during system boot. This was used previously to ensure that the services started after ntpdate would have the correct time and not observe a jump in the clock. The use of ntpdate and the list of step-tickers are considered deprecated, and so Red Hat Enterprise Linux 7 uses the -g option to the ntpd command, and not ntpdate , by default. The ntpdate service in Red Hat Enterprise Linux 7 is beneficial if it is used without the ntpd service or when the -x option is specified for the ntpd command. If ntpd is used with -x but without the ntpdate service enabled, the clock is corrected by step only if the time difference is larger than 600 seconds. With an offset smaller than 600 seconds, the clock is adjusted slowly, approximately 2000 seconds for every corrected second. To check if the ntpdate service is enabled to run at system start, issue the following command: To enable the service to run at system start, issue the following command as root : In Red Hat Enterprise Linux 7, the default /etc/ntp/step-tickers file contains 0.rhel.pool.ntp.org . To configure additional ntpdate servers, using a text editor running as root , edit /etc/ntp/step-tickers . The number of servers listed is not very important, as ntpdate only uses the list to obtain the date information once, when the system is starting. If you have an internal time server, use that host name for the first line. An additional host on the second line as a backup is sensible. The selection of backup servers, and whether the second host is internal or external, depends on your risk assessment. For example, what is the chance of any problem affecting the first server also affecting the second server? Would connectivity to an external server be more likely to be available than connectivity to internal servers in the event of a network failure disrupting access to the first server? 19.17. Configure NTP To change the default configuration of the NTP service, use a text editor running as the root user to edit the /etc/ntp.conf file. This file is installed together with ntpd and is configured to use time servers from the Red Hat pool by default. The man page ntp.conf(5) describes the command options that can be used in the configuration file, apart from the access and rate limiting commands, which are explained in the ntp_acc(5) man page. 19.17.1. Configure Access Control to an NTP Service To restrict or control access to the NTP service running on a system, make use of the restrict command in the ntp.conf file. See the commented-out example: The restrict command takes the following form: where address and mask specify the IP addresses to which you want to apply the restriction, and option is one or more of: ignore - all packets will be ignored, including ntpq and ntpdc queries. kod - a "Kiss-o'-death" packet is to be sent to reduce unwanted queries. limited - do not respond to time service requests if the packet violates the rate limit default values or those specified by the discard command. ntpq and ntpdc queries are not affected.
For more information on the discard command and the default values, see Section 19.17.2, "Configure Rate Limiting Access to an NTP Service" . lowpriotrap - traps set by matching hosts to be low priority. nomodify - prevents any changes to the configuration. noquery - prevents ntpq and ntpdc queries, but not time queries, from being answered. nopeer - prevents a peer association being formed. noserve - deny all packets except ntpq and ntpdc queries. notrap - prevents ntpdc control message protocol traps. notrust - deny packets that are not cryptographically authenticated. ntpport - modify the match algorithm to only apply the restriction if the source port is the standard NTP UDP port 123 . version - deny packets that do not match the current NTP version. To configure rate-limited access that does not respond at all to a query, the respective restrict command has to have the limited option. If ntpd should reply with a KoD packet, the restrict command needs to have both the limited and kod options. The ntpq and ntpdc queries can be used in amplification attacks (see CVE-2013-5211 for more details); therefore, do not remove the noquery option from the restrict default command on publicly accessible systems. 19.17.2. Configure Rate Limiting Access to an NTP Service To enable rate limiting access to the NTP service running on a system, add the limited option to the restrict command as explained in Section 19.17.1, "Configure Access Control to an NTP Service" . If you do not want to use the default discard parameters, then also use the discard command as explained here. The discard command takes the following form: average - specifies the minimum average packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 3 ( 2^3 equates to 8 seconds). minimum - specifies the minimum packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 1 ( 2^1 equates to 2 seconds). monitor - specifies the discard probability for packets once the permitted rate limits have been exceeded. The default value is 3000 seconds. This option is intended for servers that receive 1000 or more requests per second. Examples of the discard command are as follows: 19.17.3. Adding a Peer Address To add the address of a peer, that is to say, the address of a server running an NTP service of the same stratum, make use of the peer command in the ntp.conf file. The peer command takes the following form: where address is an IP unicast address or a DNS resolvable name. The address must only be that of a system known to be a member of the same stratum. Peers should each have at least one time source that is different from the other's. Peers are normally systems under the same administrative control. 19.17.4. Adding a Server Address To add the address of a server, that is to say, the address of a server running an NTP service of a higher stratum, make use of the server command in the ntp.conf file. The server command takes the following form: where address is an IP unicast address or a DNS resolvable name of the remote reference server or local reference clock from which packets are to be received. 19.17.5. Adding a Broadcast or Multicast Server Address To add a broadcast or multicast address for sending, that is to say, the address to broadcast or multicast NTP packets to, make use of the broadcast command in the ntp.conf file. The broadcast and multicast modes require authentication by default. See Section 19.6, "Authentication Options for NTP" .
The broadcast command takes the following form: where address is an IP broadcast or multicast address to which packets are sent. This command configures a system to act as an NTP broadcast server. The address used must be a broadcast or a multicast address. Broadcast address implies the IPv4 address 255.255.255.255 . By default, routers do not pass broadcast messages. The multicast address can be an IPv4 Class D address, or an IPv6 address. The IANA has assigned IPv4 multicast address 224.0.1.1 and IPv6 address FF05::101 (site local) to NTP . Administratively scoped IPv4 multicast addresses can also be used, as described in RFC 2365 Administratively Scoped IP Multicast . 19.17.6. Adding a Manycast Client Address To add a manycast client address, that is to say, to configure a multicast address to be used for NTP server discovery, make use of the manycastclient command in the ntp.conf file. The manycastclient command takes the following form: where address is an IP multicast address from which packets are to be received. The client will send a request to the address and select the best servers from the responses and ignore other servers. NTP communication then uses unicast associations, as if the discovered NTP servers were listed in ntp.conf . This command configures a system to act as an NTP client. Systems can be both client and server at the same time. 19.17.7. Adding a Broadcast Client Address To add a broadcast client address, that is to say, to configure a broadcast address to be monitored for broadcast NTP packets, make use of the broadcastclient command in the ntp.conf file. The broadcastclient command takes the following form: Enables the receiving of broadcast messages. Requires authentication by default. See Section 19.6, "Authentication Options for NTP" . This command configures a system to act as an NTP client. Systems can be both client and server at the same time. 19.17.8. Adding a Manycast Server Address To add a manycast server address, that is to say, to configure an address to allow the clients to discover the server by multicasting NTP packets, make use of the manycastserver command in the ntp.conf file. The manycastserver command takes the following form: Enables the sending of multicast messages. Where address is the address to multicast to. This should be used together with authentication to prevent service disruption. This command configures a system to act as an NTP server. Systems can be both client and server at the same time. 19.17.9. Adding a Multicast Client Address To add a multicast client address, that is to say, to configure a multicast address to be monitored for multicast NTP packets, make use of the multicastclient command in the ntp.conf file. The multicastclient command takes the following form: Enables the receiving of multicast messages. Where address is the address to subscribe to. This should be used together with authentication to prevent service disruption. This command configures a system to act as an NTP client. Systems can be both client and server at the same time. 19.17.10. Configuring the Burst Option Using the burst option against a public server is considered abuse. Do not use this option with public NTP servers. Use it only for applications within your own organization. To increase the average quality of time offset statistics, add the following option to the end of a server command: At every poll interval, when the server responds, the system will send a burst of up to eight packets instead of the usual one packet. 
For use with the server command to improve the average quality of the time-offset calculations. 19.17.11. Configuring the iburst Option To improve the time taken for initial synchronization, add the following option to the end of a server command: When the server is unreachable, send a burst of eight packets instead of the usual one packet. The packet spacing is normally 2 s; however, the spacing between the first and second packets can be changed with the calldelay command to allow additional time for a modem or ISDN call to complete. For use with the server command to reduce the time taken for initial synchronization. This is now a default option in the configuration file. 19.17.12. Configuring Symmetric Authentication Using a Key To configure symmetric authentication using a key, add the following option to the end of a server or peer command: where number is in the range 1 to 65534 inclusive. This option enables the use of a message authentication code ( MAC ) in packets. This option is for use with the peer , server , broadcast , and manycastclient commands. The option can be used in the /etc/ntp.conf file as follows: See also Section 19.6, "Authentication Options for NTP" . 19.17.13. Configuring the Poll Interval To change the default poll interval, add the following options to the end of a server or peer command: These options change the default poll interval, where the interval in seconds is calculated by raising 2 to the power of value ; in other words, the interval is expressed in log2 seconds. The default minpoll value is 6 ( 2^6 equates to 64 s). The default value for maxpoll is 10, which equates to 1024 s. Allowed values are in the range 3 to 17 inclusive, which equates to 8 s to 36.4 h, respectively. These options are for use with the peer or server commands. Setting a shorter maxpoll may improve clock accuracy. 19.17.14. Configuring Server Preference To specify that a particular server should be preferred above others of similar statistical quality, add the following option to the end of a server or peer command: Use this server for synchronization in preference to other servers of similar statistical quality. This option is for use with the peer or server commands. 19.17.15. Configuring the Time-to-Live for NTP Packets To specify that a particular time-to-live ( TTL ) value should be used in place of the default, add the following option to the end of a server or peer command: Specify the time-to-live value to be used in packets sent by broadcast servers and multicast NTP servers. Specify the maximum time-to-live value to use for the "expanding ring search" by a manycast client. The default value is 127 . 19.17.16. Configuring the NTP Version to Use To specify that a particular version of NTP should be used in place of the default, add the following option to the end of a server or peer command: Specify the version of NTP set in created NTP packets. The value can be in the range 1 to 4 . The default is 4 . 19.18. Configuring the Hardware Clock Update The system clock can be used to update the hardware clock, also known as the real-time clock (RTC).
This section shows three approaches to the task: Instant one-time update To perform an instant one-time update of the hardware clock, run this command as root: Update on every boot To make the hardware clock update on every boot after executing the ntpdate synchronization utility, do the following: Add the following line to the /etc/sysconfig/ntpdate file: Enable the ntpdate service as root: Note that the ntpdate service uses the NTP servers defined in the /etc/ntp/step-tickers file. Note On virtual machines, the hardware clock will be updated on the boot of the host machine, not of the virtual machine. Update via NTP You can make the hardware clock update every time the system clock is updated by the ntpd or chronyd service: Start the ntpd service as root: To make the behavior persistent across boots, make the service start automatically at boot time: or Start the chronyd service as root: To make the behavior persistent across boots, make the service start automatically at boot time: As a result, every time the system clock is synchronized by ntpd or chronyd , the kernel automatically updates the hardware clock every 11 minutes. Warning This approach might not always work because the above-mentioned 11-minute mode is not always enabled. As a consequence, the hardware clock does not necessarily get updated on the system clock update. To check the synchronization of the software clock with the hardware clock, use the ntpdc -c kerninfo or the ntptime command as root : The result may look like this: or The result may look like this: To recognize whether the hardware clock is synchronized to the system clock, see the status line in the output. If the line contains the word unsync or UNSYNC , the hardware clock is not synchronized to the system clock. For example, a status of 0x2001 (PLL,NANO) indicates that the hardware clock is synchronized to the system clock, whereas a status of 0x41 (PLL,UNSYNC) indicates that it is not. 19.19. Configuring Clock Sources To list the available clock sources on your system, issue the following commands: In the above example, the kernel is using kvm-clock . This was selected at boot time as this is a virtual machine. Note that the available clock source is architecture dependent. To override the default clock source, append the clocksource directive to the end of the kernel's GRUB 2 menu entry. Use the grubby tool to make the change. For example, to force the default kernel on a system to use the tsc clock source, enter a command as follows: The --update-kernel parameter also accepts the keyword ALL , or a comma-separated list of kernel index numbers. See Chapter 26, Working with GRUB 2 for more information on making changes to the GRUB 2 menu. 19.20. Additional Resources The following sources of information provide additional resources regarding NTP and ntpd . 19.20.1. Installed Documentation ntpd(8) man page - Describes ntpd in detail, including the command-line options. ntp.conf(5) man page - Contains information on how to configure associations with servers and peers. ntpq(8) man page - Describes the NTP query utility for monitoring and querying an NTP server. ntpdc(8) man page - Describes the ntpdc utility for querying and changing the state of ntpd . ntp_auth(5) man page - Describes authentication options, commands, and key management for ntpd . ntp_keygen(8) man page - Describes generating public and private keys for ntpd . ntp_acc(5) man page - Describes access control options using the restrict command. ntp_mon(5) man page - Describes monitoring options for the gathering of statistics.
ntp_clock(5) man page - Describes commands for configuring reference clocks. ntp_misc(5) man page - Describes miscellaneous options. ntp_decode(5) man page - Lists the status words, event messages and error codes used for ntpd reporting and monitoring. ntpstat(8) man page - Describes a utility for reporting the synchronization state of the NTP daemon running on the local machine. ntptime(8) man page - Describes a utility for reading and setting kernel time variables. tickadj(8) man page - Describes a utility for reading, and optionally setting, the length of the tick. 19.20.2. Useful Websites http://doc.ntp.org/ The NTP Documentation Archive http://www.eecis.udel.edu/~mills/ntp.html Network Time Synchronization Research Project. http://www.eecis.udel.edu/~mills/ntp/html/manyopt.html Information on Automatic Server Discovery in NTPv4 . | [
"~]USD less /etc/ntp.conf",
"driftfile /var/lib/ntp/drift",
"restrict default nomodify notrap nopeer noquery",
"the administrative functions. restrict 127.0.0.1 restrict ::1",
"restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap nopeer",
"restrict 192.0.2.250",
"server 0.rhel.pool.ntp.org iburst server 1.rhel.pool.ntp.org iburst server 2.rhel.pool.ntp.org iburst server 3.rhel.pool.ntp.org iburst",
"Command line options for ntpd OPTIONS=\"-g\"",
"~]# systemctl stop chronyd",
"~]# systemctl disable chronyd",
"~]USD systemctl status chronyd",
"~]# yum install ntp",
"~]# yum install ntp",
"~]# systemctl enable ntpd",
"~]USD systemctl status ntpd",
"~]USD ntpstat unsynchronised time server re-starting polling server every 64 s",
"~]USD ntpstat synchronised to NTP server (10.5.26.10) at stratum 2 time correct to within 52 ms polling server every 1024 s",
"~]# firewall-config",
"~]USD systemctl status ntpdate",
"~]# systemctl enable ntpdate",
"Hosts on local network are less restricted. #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap",
"restrict address [mask mask ] option",
"discard average value minimum value monitor value",
"discard average 4",
"discard average 4 minimum 2",
"peer address",
"server address",
"broadcast address",
"manycastclient address",
"broadcastclient",
"manycastserver address",
"multicastclient address",
"burst",
"iburst",
"key number",
"server 192.168.1.1 key 10 broadcast 192.168.1.255 key 20 manycastclient 239.255.254.254 key 30",
"minpoll value and maxpoll value",
"prefer",
"ttl value",
"version value",
"~]# hwclock --systohc",
"SYNC_HWCLOCK=yes",
"~]# systemctl enable ntpdate.service",
"~]# systemctl start ntpd.service",
"~]# systemctl enable ntpd.service",
"~]# systemctl start chronyd.service",
"~]# systemctl enable chronyd.service",
"~]# ntpdc -c kerninfo",
"pll offset: 0 s pll frequency: 0.000 ppm maximum error: 8.0185 s estimated error: 0 s status: 2001 pll nano pll time constant: 6 precision: 1e-09 s frequency tolerance: 500 ppm",
"~]# ntptime",
"ntp_gettime() returns code 0 (OK) time dcba5798.c3dfe2e0 Mon, May 8 2017 11:34:00.765, (.765135199), maximum error 8010000 us, estimated error 0 us, TAI offset 0 ntp_adjtime() returns code 0 (OK) modes 0x0 (), offset 0.000 us, frequency 0.000 ppm, interval 1 s, maximum error 8010000 us, estimated error 0 us, status 0x2001 (PLL,NANO), time constant 6, precision 0.001 us, tolerance 500 ppm,",
"status 0x2001 (PLL,NANO)",
"status 0x41 (PLL,UNSYNC)",
"~]USD cd /sys/devices/system/clocksource/clocksource0/ clocksource0]USD cat available_clocksource kvm-clock tsc hpet acpi_pm clocksource0]USD cat current_clocksource kvm-clock",
"~]# grubby --args=clocksource=tsc --update-kernel=DEFAULT"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-Configuring_NTP_Using_ntpd |
Chapter 84. ExternalConfigurationVolumeSource schema reference | Chapter 84. ExternalConfigurationVolumeSource schema reference Used in: ExternalConfiguration Property Description configMap Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 configmapvolumesource . ConfigMapVolumeSource name Name of the volume which will be added to the Kafka Connect pods. string secret Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 secretvolumesource . SecretVolumeSource | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ExternalConfigurationVolumeSource-reference |
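As an illustration of how the properties above fit together, the following minimal sketch shows an ExternalConfigurationVolumeSource declared under a Kafka Connect resource's externalConfiguration section; the volume name and the referenced Secret name are illustrative only and are not taken from the schema reference itself:
externalConfiguration:
  volumes:
    - name: connector-credentials        # name of the volume added to the Kafka Connect pods
      secret:
        secretName: my-connector-secret  # exactly one of secret or configMap may be specified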
18.3.5. Automating the Installation with Kickstart | 18.3.5. Automating the Installation with Kickstart You can allow an installation to run unattended by using Kickstart. A Kickstart file specifies settings for an installation. Once the installation system boots, it can read a Kickstart file and carry out the installation process without any further input from a user. On System z, this also requires a parameter file (optionally an additional configuration file under z/VM). This parameter file must contain the required network options described in Section 26.3, "Installation Network Parameters" and specify a kickstart file using the ks= option. The kickstart file typically resides on the network. The parameter file often also contains the options cmdline and RUNKS=1 to execute the loader without having to log in over the network with SSH (Refer to Section 26.6, "Parameters for Kickstart Installations" ). For further information and details on how to set up a kickstart file, refer to Section 32.3, "Creating the Kickstart File" . 18.3.5.1. Every Installation Produces a Kickstart File The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg . You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Installation_Procedure_Overview-s390-Automating |
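As a minimal sketch only, a parameter file that hands control to Kickstart might end with lines like the following; the kickstart URL is illustrative, and the required network options described in Section 26.3, "Installation Network Parameters" (omitted here for brevity) must also be present:
ks=http://192.168.17.1/kickstart/ks.cfg
RUNKS=1 cmdline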
Chapter 2. Component overview | Chapter 2. Component overview Red Hat AMQ consists of AMQ Broker, AMQ Interconnect, AMQ Clients, which work together to enable network communication in distributed applications. Red Hat AMQ also includes AMQ Streams, which is based on Apache Kafka. AMQ Streams does not support AMQP or use the Red Hat AMQ Console. AMQ Broker AMQ Interconnect AMQ Clients AMQ Streams 2.1. AMQ Broker AMQ Broker is a full-featured, message-oriented middleware broker. It offers advanced addressing and queueing, fast message persistence, and high availability. AMQ Broker supports multiple protocols and operating environments, enabling you to use your existing assets. AMQ Broker supports integration with Red Hat JBoss Enterprise Application Platform. For more information, see Getting Started with AMQ Broker . 2.2. AMQ Interconnect AMQ Interconnect provides flexible routing of messages between AMQP-enabled endpoints, including clients, brokers, and standalone services. With a single connection into a network of AMQ Interconnect routers, a client can exchange messages with any other endpoint connected to the network. AMQ Interconnect does not use master-slave clusters for high availability. It is typically deployed in topologies of multiple routers with redundant network paths, which it uses to provide reliable connectivity. AMQ Interconnect can distribute messaging workloads across the network and achieve new levels of scale with very low latency. For more information, see Using AMQ Interconnect . 2.3. AMQ Clients AMQ Clients is a suite of AMQP 1.0 and JMS clients, adapters, and libraries. It includes JMS 2.0 support and new, event-driven APIs to enable integration into existing applications. For more information, see AMQ Clients Overview . AMQP clients AMQ C++ AMQ JavaScript AMQ JMS (Java) AMQ .NET AMQ Python AMQ Ruby JMS clients AMQ JMS (AMQP 1.0) AMQ Core Protocol JMS AMQ OpenWire JMS Adapters and libraries AMQ JMS Pool AMQ Spring Boot Starter 2.4. AMQ Streams AMQ Streams is a massively-scalable, distributed, and high-performance data streaming platform based on Apache Kafka. AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. It can also be installed on Red Hat Enterprise Linux. AMQ Streams provides container images and Operators for running Kafka on OpenShift. AMQ Streams Operators are fundamental to the running of AMQ Streams. The Operators provided with AMQ Streams are purpose-built with specialist operational knowledge to effectively manage Kafka. Cluster Operator Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator Entity Operator Comprises the Topic Operator and User Operator Topic Operator Manages Kafka topics User Operator Manages Kafka users representing the client applications that access the Kafka brokers For more information, see AMQ Streams on OpenShift Overview . 2.5. Component compatibility The following table lists the supported languages, platforms, and protocols of AMQ components. Note that any components supporting the same protocol can interoperate, even if their languages and platforms differ. For instance, AMQ Python can communicate with AMQ JMS. Table 2.1. 
AMQ component compatibility Component Languages Platforms Protocols AMQ Broker - JVM AMQP 1.0, MQTT, OpenWire, STOMP, Core Protocol AMQ Interconnect - Linux AMQP 1.0 AMQ C++ C++ Linux, Windows AMQP 1.0 AMQ JavaScript JavaScript Node.js, browsers AMQP 1.0 AMQ JMS Java JVM AMQP 1.0 AMQ .NET C# .NET AMQP 1.0 AMQ Python Python Linux AMQP 1.0 AMQ Ruby Ruby Linux AMQP 1.0 AMQ Spring Boot Starter Java JVM AMQP 1.0 AMQ Core Protocol JMS Java JVM Core Protocol AMQ OpenWire JMS Java JVM OpenWire AMQ JMS Pool Java JVM - For more information, see Red Hat AMQ 7 Supported Configurations . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/introducing_red_hat_amq_7/components |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/analyzing_your_cost_data/proc-providing-feedback-on-redhat-documentation |
Package Manifest | Package Manifest Red Hat Satellite 6.15 Package Listing for Red Hat Satellite Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/package_manifest/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/making-open-source-more-inclusive |
4.6. Configuring SSL Connections with the TPS | 4.6. Configuring SSL Connections with the TPS By default, the TPS communicates with the Enterprise Security Client over standard HTTP. It is also possible, and in many situations desirable, to secure the TPS-client communications by using HTTP over SSL (HTTPS). The Enterprise Security Client has to have the CA certificate for the CA which issued the TPS's certificates in order to trust the TPS connection. From there, the Enterprise Security Client can be configured to connect to the TPS's SSL certificate. Download the CA certificate used by the TPS. Open the CA's end user pages in a web browser. Click the Retrieval tab at the top. In the left menu, click the Import CA Certificate Chain link. Choose the radio button to download the chain as a file, and remember the location and name of the downloaded file. Open the Enterprise Security Client. Import the CA certificate. Click the View Certificates button. Click the Authorities tab. Click Import . Browse to the CA certificate chain file, and select it. When prompted, confirm that you want to trust the CA. The Enterprise Security Client needs to be configured to communicate with the TPS over SSL; this is done by setting the Phone Home URL , which is the default URL the Enterprise Security Client uses to connect to the TPS. Insert a new, blank token into the machine. Blank tokens are unformatted, so they do not have an existing Phone Home URL, and the URL must be set manually. Formatted tokens (tokens can be formatted by the manufacturer or by your IT department) already have the URL set, and thus do not prompt to set the Phone Home URL. Fill in the new TPS URL with the SSL port information. For example: Click the Test button to send a message to the TPS. If the request is successful, the client opens a dialog box saying that the Phone Home URL was successfully obtained. | [
"http s ://server.example.com: 9444/ca/ee/ca/",
"http s ://server.example.com: 7890 /cgi-bin/home/index.cgi"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/configuring-ssl |
Nodes | Nodes OpenShift Container Platform 4.15 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi",
"oc project <project-name>",
"oc get pods",
"oc get pods",
"NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>",
"oc adm top pods",
"oc adm top pods -n openshift-console",
"NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi",
"oc adm top pod --selector=''",
"oc adm top pod --selector='name=my-pod'",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }",
"oc create -f <file_or_dir_path>",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1",
"oc create -f pod-disruption-budget.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1",
"oc create -f <file-name>.yaml",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"horizontalpodautoscaler.autoscaling/hello-node autoscaled",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0",
"oc get deployment hello-node",
"NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config",
"type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60",
"behavior: scaleDown: stabilizationWindowSeconds: 300",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled",
"oc edit hpa hpa-resource-metrics-memory",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11",
"oc create -f <file-name>.yaml",
"oc get hpa cpu-autoscale",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler",
"Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max",
"oc create -f <file-name>.yaml",
"oc create -f hpa.yaml",
"horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created",
"oc get hpa hpa-resource-metrics-memory",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m",
"oc describe hpa hpa-resource-metrics-memory",
"Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group \"apps\"",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc describe hpa <pod-name>",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc get all -n openshift-vertical-pod-autoscaler",
"NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s",
"resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi",
"resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3",
"oc get vpa <vpa-name> --output yaml",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"",
"spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 resources: requests: cpu: 80m memory: 350M",
"apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 resources: requests: cpu: 40m memory: 150Mi",
"apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true resources: requests: cpu: 75m memory: 275Mi",
"apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>",
"apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true",
"oc get pods",
"NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender",
"oc create -f <file-name>.yaml",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod",
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"",
"oc delete namespace openshift-vertical-pod-autoscaler",
"oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io",
"oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io",
"oc delete crd verticalpodautoscalers.autoscaling.k8s.io",
"oc delete MutatingWebhookConfiguration vpa-webhook-config",
"oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB",
"apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com",
"oc create sa <service_account_name> -n <your_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3",
"oc apply -f service-account-token-secret.yaml",
"oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1",
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA",
"curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2",
"apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1",
"kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f <file-name>.yaml",
"oc get secrets",
"NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m",
"oc describe secret my-cert",
"Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes",
"apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testSecret",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testParameter",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers",
"oc apply -f azure-provider.yaml",
"SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"",
"SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"",
"oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}",
"oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"",
"oc create -f secret-provider-class-azure.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"secret1",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1",
"my-secret-value",
"oc edit secretproviderclass my-azure-provider 1",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"",
"oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1",
"status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"oc create configmap <configmap_name> [options]",
"oc create configmap game-config --from-file=example-files/",
"oc describe configmaps game-config",
"Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config --from-file=example-files/",
"oc get configmaps game-config -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"oc get configmaps game-config-2 -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985",
"oc get configmaps game-config-3 -o yaml",
"apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985",
"oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm",
"oc get configmaps special-config -o yaml",
"apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"oc get priorityclasses",
"NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1",
"oc create -f <file-name>.yaml",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.28.5",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc get pods -n openshift-run-once-duration-override-operator",
"NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s",
"oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true",
"apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done",
"oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds",
"activeDeadlineSeconds: 3600",
"oc edit runoncedurationoverride cluster",
"apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1",
"oc delete crd scaledobjects.keda.k8s.io",
"oc delete crd triggerauthentications.keda.k8s.io",
"oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem",
"oc get all -n openshift-keda",
"NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10",
"oc project <project_name> 1",
"oc create serviceaccount thanos 1",
"apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token",
"oc create -f <file_name>.yaml",
"oc describe serviceaccount thanos 1",
"Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>",
"apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5",
"oc create -f <file-name>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7",
"apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3",
"apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD",
"oc create -f <filename>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2",
"oc apply -f <filename>",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"",
"get pod -n openshift-keda",
"NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s",
"oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1",
"oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda",
"oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda",
"sh-4.4USD cd /var/audit-policy/",
"sh-4.4USD ls",
"log-2023.02.17-14:50 policy.yaml",
"sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1",
"sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}",
"└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication",
"oc create -f <filename>.yaml",
"oc get scaledobject <scaled_object_name>",
"NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s",
"kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication",
"oc create -f <filename>.yaml",
"oc get scaledjob <scaled_job_name>",
"NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s",
"oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh",
"oc get clusterrole | grep keda.sh",
"oc delete clusterrole.keda.sh-v1alpha1-admin",
"oc get clusterrolebinding | grep keda.sh",
"oc delete clusterrolebinding.keda.sh-v1alpha1-admin",
"oc delete project openshift-keda",
"oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #",
"apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1-east spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s2-east spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: team4a spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc label node node1 e2e-az-name=e2e-az1",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #",
"oc create -f <file-name>.yaml",
"oc label node node1 e2e-az-name=e2e-az3",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #",
"oc create -f <file-name>.yaml",
"oc label node node1 zone=us",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1",
"oc label node node1 zone=emea",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc describe pod pod-s1",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]",
"oc apply -f project.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.28.5",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.28.5",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.28.5",
"Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector conflicts with its project node label selector",
"oc edit namespace <name>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.28.5",
"oc label <resource> <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.28.5",
"apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1",
"apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated",
"apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1",
"oc describe pod nginx -n default",
"Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp",
"kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr # spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #",
"oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #",
"oc adm new-project <name> --node-selector=\"\"",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #",
"oc create -f daemonset.yaml",
"oc get pods",
"hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m",
"oc describe pod/hello-daemonset-cx6md|grep Node",
"Node: openshift-node01.hostname.com/10.14.20.134",
"oc describe pod/hello-daemonset-e3md9|grep Node",
"Node: openshift-node02.hostname.com/10.14.20.137",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"oc delete cronjob/<cron_job_name>",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"oc create -f <file-name>.yaml",
"oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 timeZone: Etc/UTC 2 concurrencyPolicy: \"Replace\" 3 startingDeadlineSeconds: 200 4 suspend: true 5 successfulJobsHistoryLimit: 3 6 failedJobsHistoryLimit: 1 7 jobTemplate: 8 spec: template: metadata: labels: 9 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 10 #",
"oc create -f <file-name>.yaml",
"oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"oc get nodes",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.28.5 node1.example.com Ready worker 7h v1.28.5 node2.example.com Ready worker 7h v1.28.5",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.28.5 node1.example.com NotReady,SchedulingDisabled worker 7h v1.28.5 node2.example.com Ready worker 7h v1.28.5",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.28.5 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.28.5-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.28.5 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.28.5-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.28.5 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.28.5-30.rhaos4.10.gitf2f339d.el8-dev",
"oc get node <node>",
"oc get node node1.example.com",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.28.5",
"oc describe node <node>",
"oc describe node node1.example.com",
"Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.28.5-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.28.5 Kube-Proxy Version: v1.28.5 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) 
openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #",
"oc get pod --selector=<nodeSelector>",
"oc get pod --selector=kubernetes.io/os",
"oc get pod -l=<nodeSelector>",
"oc get pod -l kubernetes.io/os=linux",
"oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%",
"oc adm top node --selector=''",
"oc adm cordon <node1>",
"node/<node1> cordoned",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.28.5",
"oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]",
"oc adm drain <node1> <node2> --force=true",
"oc adm drain <node1> <node2> --grace-period=-1",
"oc adm drain <node1> <node2> --ignore-daemonsets=true",
"oc adm drain <node1> <node2> --timeout=5s",
"oc adm drain <node1> <node2> --delete-emptydir-data=true",
"oc adm drain <node1> <node2> --dry-run=true",
"oc adm uncordon <node1>",
"oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>",
"oc label nodes webconsole-7f7f6 unhealthy=true",
"kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #",
"oc label pods --all <key_1>=<value_1>",
"oc label pods --all status=unhealthy",
"oc adm cordon <node>",
"oc adm cordon node1.example.com",
"node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled",
"oc adm uncordon <node1>",
"oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>",
"oc get machinesets -n openshift-machine-api",
"oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api",
"oc edit machineset <machine-set-name> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get machineconfigpool --show-labels",
"NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False",
"oc label machineconfigpool worker custom-kubelet=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #",
"oc create -f <file-name>",
"oc create -f master-kube-config.yaml",
"oc edit schedulers.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #",
"oc create -f 99-worker-setsebool.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.28.5 ip-10-0-136-243.ec2.internal Ready master 34m v1.28.5 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.28.5 ip-10-0-142-249.ec2.internal Ready master 34m v1.28.5 ip-10-0-153-11.ec2.internal Ready worker 28m v1.28.5 ip-10-0-153-150.ec2.internal Ready master 34m v1.28.5",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"oc label machineconfigpool worker kubelet-swap=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #",
"#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"oc debug node/<node1>",
"chroot /host",
"systemctl reboot",
"ssh core@<master-node>.<cluster_name>.<base_domain>",
"sudo systemctl reboot",
"oc adm uncordon <node1>",
"ssh core@<target_node>",
"sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"oc debug node/<node_name>",
"chroot /host",
"SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #",
"oc create -f <file_name>.yaml",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #",
"oc create -f <filename>",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/kubernetes/kubelet.conf",
"\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1",
"USD(nproc) X 1/2 MiB",
"for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1",
"curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'",
"apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f myapp.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f myservice.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377",
"oc create -f mydb.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m",
"oc set volume <object_selection> <operation> <mandatory_parameters> <options>",
"oc set volume <object_type>/<name> [options]",
"oc set volume pod/p1",
"oc set volume dc --all --name=v1",
"oc set volume <object_type>/<name> --add [options]",
"oc set volume dc/registry --add",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP",
"oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'",
"oc set volume <object_type>/<name> --add --overwrite [options]",
"oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data",
"oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt",
"oc set volume <object_type>/<name> --remove [options]",
"oc set volume dc/d1 --remove --name=v1",
"oc set volume dc/d1 --remove --name=v1 --containers=c1",
"oc set volume rc/r1 --remove --confirm",
"oc rsh <pod>",
"sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3",
"apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data",
"echo -n \"admin\" | base64",
"YWRtaW4=",
"echo -n \"1f2d1e2e67df\" | base64",
"MWYyZDFlMmU2N2Rm",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=",
"oc create -f <secrets-filename>",
"oc create -f secret.yaml",
"secret \"mysecret\" created",
"oc get secret <secret-name>",
"oc get secret mysecret",
"NAME TYPE DATA AGE mysecret Opaque 2 17h",
"oc get secret <secret-name> -o yaml",
"oc get secret mysecret -o yaml",
"apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque",
"kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1",
"oc create -f <your_yaml_file>.yaml",
"oc create -f secret-pod.yaml",
"pod \"test-projected-volume\" created",
"oc get pod <name>",
"oc get pod test-projected-volume",
"NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s",
"oc exec -it <pod> <command>",
"oc exec -it test-projected-volume -- /bin/sh",
"/ # ls",
"bin home root tmp dev proc run usr etc projected-volume sys var",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never",
"oc create -f volume-pod.yaml",
"oc logs -p dapi-volume-test-pod",
"cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory",
"oc create -f pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory",
"oc create -f volume-pod.yaml",
"apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth",
"oc create -f secret.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue",
"oc create -f configmap.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"oc rsync <source> <destination> [-c <container>]",
"<pod name>:<dir>",
"oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>",
"oc rsync /home/user/source devpod1234:/src -c user-container",
"oc rsync devpod1234:/src /home/user/source",
"oc rsync devpod1234:/src/status.txt /home/user/",
"rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"export RSYNC_RSH='oc rsh'",
"rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]",
"oc exec mypod date",
"Thu Apr 9 02:21:53 UTC 2015",
"/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>",
"/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> 5000 6000",
"Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000",
"oc port-forward <pod> 8888:5000",
"Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000",
"oc port-forward <pod> :5000",
"Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000",
"oc port-forward <pod> 0:5000",
"/proxy/nodes/<node_name>/portForward/<namespace>/<pod>",
"/proxy/nodes/node123.openshift.com/portForward/myns/mypod",
"sudo sysctl -a",
"oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml",
"apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.15.0-0.nightly-2022-11-16-003434 creationTimestamp: \"2022-11-17T14:09:27Z\" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: \"2422\" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3",
"oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml",
"Please edit the object below. Lines beginning with a '#' will be ignored, and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures. # apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.rp_filter\": \"1\" } } ] }'",
"oc apply -f reverse-path-fwd-example.yaml",
"networkattachmentdefinition.k8.cni.cncf.io/tuningnad created",
"apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s",
"oc rsh example",
"sh-4.4# sysctl net.ipv4.conf.net1.rp_filter",
"net.ipv4.conf.net1.rp_filter = 1",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: [\"ALL\"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"1\" - name: net.ipv4.ip_local_port_range value: \"32770 60666\" - name: net.ipv4.tcp_syncookies value: \"0\" - name: net.ipv4.ping_group_range value: \"0 200000000\"",
"oc apply -f sysctl_pod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s",
"oc rsh sysctl-example",
"sh-4.4# sysctl kernel.shm_rmid_forced",
"kernel.shm_rmid_forced = 1",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"",
"oc apply -f sysctl-example-unsafe.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m",
"oc label machineconfigpool worker custom-kubelet=sysctl",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"",
"oc apply -f set-sysctl-worker.yaml",
"oc get machineconfigpool worker -w",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"",
"oc apply -f sysctl-example-safe-unsafe.yaml",
"Warning: would violate PodSecurity \"restricted:latest\": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s",
"oc rsh sysctl-example-safe-unsafe",
"sh-4.4# sysctl net.core.somaxconn",
"net.core.somaxconn = 1024",
"oc exec -ti no-priv -- /bin/bash",
"cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF",
"podman build .",
"io.kubernetes.cri-o.Devices: \"/dev/fuse\"",
"apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: \"/dev/fuse\"",
"spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - \"1000000\" securityContext: runAsUser: 1000",
"oc get events [-n <project>] 1",
"oc get events -n openshift-config",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"openshift-sdn\": cannot set \"openshift-sdn\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod-spec.yaml",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ose-cluster-capacity",
"podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]",
"oc create -f <file_name>.yaml",
"oc create sa cluster-capacity-sa",
"oc create sa cluster-capacity-sa -n default",
"oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod.yaml",
"oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap",
"oc create -f cluster-capacity-job.yaml",
"oc logs jobs/cluster-capacity-job",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"",
"oc create -f <limit_range_file> -n <project> 1",
"oc get limits -n demoproject",
"NAME CREATED AT resource-limits 2020-07-15T17:14:23Z",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -",
"oc delete limits <limit_name>",
"-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.",
"JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"",
"apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file-name>.yaml",
"oc rsh test",
"env | grep MEMORY | sort",
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184",
"oc rsh test",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 0",
"sed -e '' </dev/zero",
"Killed",
"echo USD?",
"137",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 1",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m",
"oc get pod test -o yaml",
"status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s",
"oc describe mc <name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd_unified_cgroup_hierarchy=1 1 cgroup_no_v1=\"all\" 2 psi=0",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.28.5",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"stat -c %T -f /sys/fs/cgroup",
"cgroup2fs",
"tmpfs",
"compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1",
"oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5",
"- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"",
"tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule",
"kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - \"10s\" node-status-report-frequency: 3 - \"1m\"",
"tolerations: - key: \"node.kubernetes.io/unreachable\" operator: \"Exists\" effect: \"NoExecute\" 1 - key: \"node.kubernetes.io/not-ready\" operator: \"Exists\" effect: \"NoExecute\" 2 tolerationSeconds: 600 3",
"export OFFLINE_TOKEN=<copied_api_token>",
"export JWT_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )",
"curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq",
"{ \"release_tag\": \"v2.5.1\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:ac87f93\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156\" } }",
"export API_URL=<api_url> 1",
"export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDOPENSHIFT_CLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDopenshift_cluster_id, \"name\": \"<openshift_cluster_name>\" 2 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{JWT_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.28.5 compute-1.example.com Ready worker 11m v1.28.5",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000",
"nmcli con up <network_interface>",
"{ \"ignition\":{ \"version\":\"3.2.0\", \"config\":{ \"merge\":[ { \"source\":\"<hosted_worker_ign_file>\" 1 } ] } }, \"storage\":{ \"files\":[ { \"path\":\"/etc/hostname\", \"contents\":{ \"source\":\"data:,<new_fqdn>\" 2 }, \"mode\":420, \"overwrite\":true, \"path\":\"/etc/hostname\" } ] } }",
"sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition",
"coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk>",
"apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"eth0\" macAddress: \"AA:BB:CC:DD:EE:11\"",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.28.5 compute-1.example.com Ready worker 11m v1.28.5",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"topk(3, sum(increase(container_runtime_crio_containers_oom_count_total[1d])) by (name))",
"rate(container_runtime_crio_image_pulls_failure_total[1h]) / (rate(container_runtime_crio_image_pulls_success_total[1h]) + rate(container_runtime_crio_image_pulls_failure_total[1h]))",
"sum by (node) (container_memory_rss{id=\"/system.slice\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 80",
"sum by (node) (container_memory_rss{id=\"/system.slice/kubelet.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50",
"sum by (node) (container_memory_rss{id=\"/system.slice/crio.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50",
"sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 80",
"sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/kubelet.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50",
"sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/crio.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/nodes/index |
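The PromQL queries at the end of the commands list above are written to be run ad hoc against the cluster monitoring stack. As a rough sketch of how one of them could be kept running as a standing alert, the following PrometheusRule wraps the CRI-O image pull failure ratio; the resource name, namespace, 10% threshold, and severity are illustrative assumptions, not values taken from the documentation row above.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: crio-image-pull-failures        # hypothetical name
  namespace: openshift-monitoring       # assumption: platform alerting namespace
  labels:
    role: alert-rules
spec:
  groups:
  - name: crio.rules
    rules:
    - alert: CrioImagePullFailureRateHigh
      # Ratio of failed to total image pulls over the last hour, per the query above
      expr: |
        rate(container_runtime_crio_image_pulls_failure_total[1h])
          /
        (rate(container_runtime_crio_image_pulls_success_total[1h]) + rate(container_runtime_crio_image_pulls_failure_total[1h]))
          > 0.1
      for: 15m
      labels:
        severity: warning
      annotations:
        summary: CRI-O image pull failure ratio has stayed above 10% for 15 minutes

Applied with oc apply -f, a rule like this fires only after the failure ratio stays above the assumed threshold for the full for: window, which avoids alerting on a single transient registry error.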
Chapter 4. Creating and building an application using the CLI | Chapter 4. Creating and building an application using the CLI 4.1. Before you begin Review About the OpenShift CLI . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. You must have the OpenShift CLI ( oc ) downloaded and installed . 4.2. Logging in to the CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure Log into OpenShift Container Platform from the CLI using your username and password or with an OAuth token: With username and password: USD oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify With an OAuth token: USD oc login <https://api.your-openshift-server.com> --token=<tokenID> You can now create a project or issue other commands for managing your cluster. Additional resources oc login oc logout 4.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure To create a new project, enter the following command: USD oc new-project user-getting-started --display-name="Getting Started with OpenShift" Example output Now using project "user-getting-started" on server "https://openshift.example.com:6443". Additional resources oc new-project 4.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To add the view role to the default service account in the user-getting-started project , enter the following command: USD oc adm policy add-role-to-user view -z default -n user-getting-started Additional resources Understanding authentication RBAC overview oc policy add-role-to-user 4.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front-end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You must have access to an OpenShift Container Platform cluster. 
Install the OpenShift CLI ( oc ). Procedure To deploy an application, enter the following command: USD oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app' Example output --> Found container image 0c2f55f (12 months old) from quay.io for "quay.io/openshiftroadshow/parksmap:latest" * An image stream tag will be created as "parksmap:latest" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend ... imagestream.image.openshift.io "parksmap" created deployment.apps "parksmap" created service "parksmap" created --> Success Additional resources oc new-app 4.5.1. Creating a route External clients can access applications running on OpenShift Container Platform through the routing layer and the data object behind that is a route . The default OpenShift Container Platform router (HAProxy) uses the HTTP header of the incoming request to determine where to proxy the connection. Optionally, you can define security, such as TLS, for the route. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To retrieve the created application service, enter the following command: USD oc get service Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s To create a route, enter the following command: USD oc create route edge parksmap --service=parksmap Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc create route edge oc get 4.5.2. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. You can view the pods in your cluster and to determine the health of those pods and the cluster as a whole. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. 
Procedure To list all pods with node names, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s To list all pod details, enter the following command: USD oc describe pods Example output Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" Normal Pulled 35s kubelet Successfully pulled image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap Additional resources oc describe oc get oc label Viewing pods Viewing pod logs 4.5.3. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-image to use two instances. Prerequisites You must have access to an OpenShift Container Platform cluster. 
You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To scale your application from one pod instance to two pod instances, enter the following command: USD oc scale --current-replicas=1 --replicas=2 deployment/parksmap Example output deployment.apps/parksmap scaled Verification To ensure that your application scaled properly, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s To scale your application back down to one pod instance, enter the following command: USD oc scale --current-replicas=2 --replicas=1 deployment/parksmap Additional resources oc scale 4.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is nationalparks . Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To create a new Python application, enter the following command: USD oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true Example output --> Found image 0406f6c (13 days old) in image stream "openshift/python" under tag "3.9-ubi8" for "python" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag "nationalparks:latest" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend ... imagestream.image.openshift.io "nationalparks" created buildconfig.build.openshift.io "nationalparks" created deployment.apps "nationalparks" created service "nationalparks" created --> Success To create a route to expose your application, nationalparks , enter the following command: USD oc create route edge nationalparks --service=nationalparks Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc new-app 4.7. 
Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, parksmap deployment uses the OpenShift Container Platform discover mechanism to display the map automatically. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To connect to a database, enter the following command: USD oc new-app quay.io/centos7/mongodb-36-centos7 --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb' Example output --> Found container image dc18f52 (8 months old) from quay.io for "quay.io/centos7/mongodb-36-centos7" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as "mongodb-nationalparks:latest" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app ... imagestream.image.openshift.io "mongodb-nationalparks" created deployment.apps "mongodb-nationalparks" created service "mongodb-nationalparks" created --> Success Additional resources oc new-project 4.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. 
Procedure To create a secret, enter the following command: USD oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb Example output secret/nationalparks-mongodb-parameters created To update the environment variable to attach the mongodb secret to the nationalpartks workload, enter the following command: USD oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks Example output deployment.apps/nationalparks updated To show the status of the nationalparks deployment, enter the following command: USD oc rollout status deployment nationalparks Example output deployment "nationalparks" successfully rolled out To show the status of the mongodb-nationalparks deployment, enter the following command: USD oc rollout status deployment mongodb-nationalparks Example output deployment "nationalparks" successfully rolled out deployment "mongodb-nationalparks" successfully rolled out Additional resources oc create secret generic oc set env oc rollout status 4.7.2. Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To load national parks data, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load Example output "Items inserted in database: 2893" To verify that your data is loaded properly, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all Example output (trimmed) , {"id": "Great Zimbabwe", "latitude": "-20.2674635", "longitude": "30.9337986", "name": "Great Zimbabwe"}] To add labels to the route, enter the following command: USD oc label route nationalparks type=parksmap-backend Example output route.route.openshift.io/nationalparks labeled To retrieve your routes to view your map, enter the following command: USD oc get routes Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Copy and paste the HOST/PORT path you retrieved above into your web browser. Your browser should display a map of the national parks across the world. Figure 4.1. National parks across the world Additional resources oc exec oc label oc get | [
"oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify",
"oc login <https://api.your-openshift-server.com> --token=<tokenID>",
"oc new-project user-getting-started --display-name=\"Getting Started with OpenShift\"",
"Now using project \"user-getting-started\" on server \"https://openshift.example.com:6443\".",
"oc adm policy add-role-to-user view -z default -n user-getting-started",
"oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app'",
"--> Found container image 0c2f55f (12 months old) from quay.io for \"quay.io/openshiftroadshow/parksmap:latest\" * An image stream tag will be created as \"parksmap:latest\" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend imagestream.image.openshift.io \"parksmap\" created deployment.apps \"parksmap\" created service \"parksmap\" created --> Success",
"oc get service",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s",
"oc create route edge parksmap --service=parksmap",
"route.route.openshift.io/parksmap created",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None",
"oc get pods",
"NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s",
"oc describe pods",
"Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/networks-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" Normal Pulled 35s kubelet Successfully pulled image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap",
"oc scale --current-replicas=1 --replicas=2 deployment/parksmap",
"deployment.apps/parksmap scaled",
"oc get pods",
"NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s",
"oc scale --current-replicas=2 --replicas=1 deployment/parksmap",
"oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true",
"--> Found image 0406f6c (13 days old) in image stream \"openshift/python\" under tag \"3.9-ubi8\" for \"python\" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag \"nationalparks:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend imagestream.image.openshift.io \"nationalparks\" created buildconfig.build.openshift.io \"nationalparks\" created deployment.apps \"nationalparks\" created service \"nationalparks\" created --> Success",
"oc create route edge nationalparks --service=nationalparks",
"route.route.openshift.io/parksmap created",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None",
"oc new-app quay.io/centos7/mongodb-36-centos7 --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb'",
"--> Found container image dc18f52 (8 months old) from quay.io for \"quay.io/centos7/mongodb-36-centos7\" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as \"mongodb-nationalparks:latest\" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app imagestream.image.openshift.io \"mongodb-nationalparks\" created deployment.apps \"mongodb-nationalparks\" created service \"mongodb-nationalparks\" created --> Success",
"oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb",
"secret/nationalparks-mongodb-parameters created",
"oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks",
"deployment.apps/nationalparks updated",
"oc rollout status deployment nationalparks",
"deployment \"nationalparks\" successfully rolled out",
"oc rollout status deployment mongodb-nationalparks",
"deployment \"nationalparks\" successfully rolled out deployment \"mongodb-nationalparks\" successfully rolled out",
"oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load",
"\"Items inserted in database: 2893\"",
"oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all",
", {\"id\": \"Great Zimbabwe\", \"latitude\": \"-20.2674635\", \"longitude\": \"30.9337986\", \"name\": \"Great Zimbabwe\"}]",
"oc label route nationalparks type=parksmap-backend",
"route.route.openshift.io/nationalparks labeled",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/getting_started/openshift-cli |
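As a final check for the national parks example above, you can confirm that the secret-backed environment variables and the backend route label were applied as expected. This is a minimal verification sketch that assumes the resource names used in this example (the nationalparks deployment, the nationalparks-mongodb-parameters secret, and the type=parksmap-backend label); it is not part of the original procedure:

oc set env deployment/nationalparks --list
oc get route nationalparks --show-labels

The first command lists the environment variables on the deployment, including the entries that reference the secret; the second should show the type=parksmap-backend label that lets the parksmap front end discover the route.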
Chapter 1. Selecting OpenShift AI administrator and user groups | Chapter 1. Selecting OpenShift AI administrator and user groups By default, all users authenticated in OpenShift can access OpenShift AI. Also by default, users with cluster-admin permissions are OpenShift AI administrators. A cluster admin is a superuser that can perform any action in any project in the OpenShift cluster. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. After a cluster admin user defines additional administrator and user groups in OpenShift, you can add those groups to OpenShift AI by selecting them in the OpenShift AI dashboard. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. The groups that you want to select as administrator and user groups for OpenShift AI already exist in OpenShift. For more information, see Managing users and groups . Procedure From the OpenShift AI dashboard, click Settings User management . Select your OpenShift AI administrator groups: Under Data science administrator groups , click the text box and select an OpenShift group. Repeat this process to define multiple administrator groups. Select your OpenShift AI user groups: Under Data science user groups , click the text box and select an OpenShift group. Repeat this process to define multiple user groups. Important The system:authenticated setting allows all users authenticated in OpenShift to access OpenShift AI. Click Save changes . Verification Administrator users can successfully log in to OpenShift AI and have access to the Settings navigation menu. Non-administrator users can successfully log in to OpenShift AI. They can also access and use individual components, such as projects and workbenches. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_resources/selecting-admin-and-user-groups_resource-mgmt |
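The dashboard only lets you select groups that already exist in OpenShift. As an illustration of that prerequisite, a cluster admin could create and populate such groups from the CLI before selecting them in the dashboard; the group and user names below are hypothetical examples, not required names:

oc adm groups new rhoai-admins
oc adm groups add-users rhoai-admins admin-user1
oc adm groups new rhoai-users
oc adm groups add-users rhoai-users data-scientist1
oc get groups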
Chapter 4. Enabling user-managed encryption for Azure | Chapter 4. Enabling user-managed encryption for Azure In OpenShift Container Platform version 4.13, you can install a cluster with a user-managed encryption key in Azure. To enable this feature, you can prepare an Azure DiskEncryptionSet before installation, modify the install-config.yaml file, and then complete the installation. 4.1. Preparing an Azure Disk Encryption Set The OpenShift Container Platform installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in Azure and provide the key to the installer. Procedure Set the following environment variables for the Azure resource group by running the following command: USD export RESOURCEGROUP="<resource_group>" \ 1 LOCATION="<location>" 2 1 Specifies the name of the Azure resource group where you will create the Disk Encryption Set and encryption key. To avoid losing access to your keys after destroying the cluster, you should create the Disk Encryption Set in a different resource group than the resource group where you install the cluster. 2 Specifies the Azure location where you will create the resource group. Set the following environment variables for the Azure Key Vault and Disk Encryption Set by running the following command: USD export KEYVAULT_NAME="<keyvault_name>" \ 1 KEYVAULT_KEY_NAME="<keyvault_key_name>" \ 2 DISK_ENCRYPTION_SET_NAME="<disk_encryption_set_name>" 3 1 Specifies the name of the Azure Key Vault you will create. 2 Specifies the name of the encryption key you will create. 3 Specifies the name of the disk encryption set you will create. Set the environment variable for the ID of your Azure Service Principal by running the following command: USD export CLUSTER_SP_ID="<service_principal_id>" 1 1 Specifies the ID of the service principal you will use for this installation. 
Enable host-level encryption in Azure by running the following commands: USD az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost" USD az feature show --namespace Microsoft.Compute --name EncryptionAtHost USD az provider register -n Microsoft.Compute Create an Azure Resource Group to hold the disk encryption set and associated resources by running the following command: USD az group create --name USDRESOURCEGROUP --location USDLOCATION Create an Azure key vault by running the following command: USD az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION \ --enable-purge-protection true Create an encryption key in the key vault by running the following command: USD az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME \ --protection software Capture the ID of the key vault by running the following command: USD KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query "[id]" -o tsv) Capture the key URL in the key vault by running the following command: USD KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name \ USDKEYVAULT_KEY_NAME --query "[key.kid]" -o tsv) Create a disk encryption set by running the following command: USD az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g \ USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL Grant the DiskEncryptionSet resource access to the key vault by running the following commands: USD DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \ USDRESOURCEGROUP --query "[identity.principalId]" -o tsv) USD az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id \ USDDES_IDENTITY --key-permissions wrapkey unwrapkey get Grant the Azure Service Principal permission to read the DiskEncryptionSet by running the following commands: USD DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \ USDRESOURCEGROUP --query "[id]" -o tsv) USD az role assignment create --assignee USDCLUSTER_SP_ID --role "<reader_role>" \ 1 --scope USDDES_RESOURCE_ID -o jsonc 1 Specifies an Azure role with read permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 4.2. steps Install an OpenShift Container Platform cluster: Install a cluster with customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Install a cluster into an existing VNet on installer-provisioned infrastructure Install a private cluster on installer-provisioned infrastructure Install a cluster into an government region on installer-provisioned infrastructure | [
"export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2",
"export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3",
"export CLUSTER_SP_ID=\"<service_principal_id>\" 1",
"az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"",
"az feature show --namespace Microsoft.Compute --name EncryptionAtHost",
"az provider register -n Microsoft.Compute",
"az group create --name USDRESOURCEGROUP --location USDLOCATION",
"az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true",
"az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software",
"KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)",
"KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)",
"az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL",
"DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)",
"az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get",
"DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)",
"az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/enabling-user-managed-encryption-azure |
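Before moving on to the cluster installation, you can confirm that the disk encryption set and the role assignment exist and note the values you will need later. The following sketch assumes the environment variables defined above; the install-config.yaml fragment is illustrative only, and the exact field layout should be checked against the installation customization documentation for your release:

az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g $RESOURCEGROUP --query "[id,identity.principalId]" -o tsv
az role assignment list --assignee $CLUSTER_SP_ID --scope $DES_RESOURCE_ID -o table

controlPlane:
  platform:
    azure:
      osDisk:
        diskEncryptionSet:
          resourceGroup: <resource_group>
          name: <disk_encryption_set_name>
          subscriptionId: <subscription_id>

A similar diskEncryptionSet block can be set under the compute machine pool so that worker disks are encrypted with the same key.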
Chapter 1. Features | Chapter 1. Features AMQ Streams version 1.8 is based on Strimzi 0.24.x. The features added in this release, and that were not in releases of AMQ Streams, are outlined below. Note To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project . 1.1. OpenShift Container Platform support AMQ Streams 1.8 is supported on OpenShift Container Platform 4.6 and 4.8. For more information about the supported platform versions, see the Red Hat Knowledgebase article Red Hat AMQ 7 Supported Configurations . 1.2. Kafka 2.8.0 support AMQ Streams now supports Apache Kafka version 2.8.0. AMQ Streams uses Kafka 2.8.0. Only Kafka distributions built by Red Hat are supported. You must upgrade the Cluster Operator to AMQ Streams version 1.8 before you can upgrade brokers and client applications to Kafka 2.8.0. For upgrade instructions, see Upgrading AMQ Streams . Refer to the Kafka 2.7.0 and Kafka 2.8.0 Release Notes for additional information. Note Kafka 2.7.x is supported only for the purpose of upgrading to AMQ Streams 1.8. For more information on supported versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal. Kafka version 2.8.0 requires ZooKeeper version 3.5.9. Therefore, the Cluster Operator performs a ZooKeeper upgrade when you upgrade from AMQ Streams 1.7 to AMQ Streams 1.8. 1.3. Feature gates to toggle features on and off As a Kafka cluster administrator, you can now toggle a subset of features on and off using feature gates in the operator's deployment configuration. Feature gates are currently available only for the Cluster Operator; future releases might add feature gates to other operators. AMQ Streams 1.8 introduces the following feature gates and associated new features: ControlPlaneListener to toggle control plane listeners ServiceAccountPatching to toggle service account patching Feature gates have a default state of enabled or disabled . When enabled, a feature gate changes the behavior of the operator and enables the feature in your AMQ Streams deployment. Feature gates have a maturity level of Alpha , Beta , or Generally Available (GA). Table 1.1. Maturity levels of feature gates Feature gate maturity level Description Default state Alpha Controls features that might be experimental, unstable, or not sufficiently tested for production use. These features are subject to change in future releases. Disabled Beta Controls features that are well tested. These features are not likely to change in future releases. Enabled General Availability (GA) Controls features that are stable, well tested, and suitable for production use. GA features will not change in future releases. Enabled Configuring feature gates In the Cluster Operator's deployment configuration, in the STRIMZI_FEATURE_GATES environment variable, specify a comma-separated list of feature gate names and prefixes. A + prefix enables the feature gate and a - prefix disables it. Example: Enabling the Control Plane Listener feature gate Edit the Deployment for the Cluster Operator: oc edit deployment strimzi-cluster-operator Add the STRIMZI_FEATURE_GATES environment variable with a value of +ControlPlaneListener # ... env: #... - name: STRIMZI_FEATURE_GATES value: +ControlPlaneListener #... See Feature gates and Cluster Operator configuration . 1.4. Control plane listeners Note This feature is controlled using the ControlPlaneListener feature gate, which is in alpha stage and disabled by default. For more information, see Feature gates . 
In a standard AMQ Streams cluster, control plane traffic and data plane traffic both use the same inter-broker listener on port 9091. With this release, you can configure your cluster so that control plane traffic uses a dedicated control plane listener on port 9090. Data plane traffic continues to use the listener on port 9091. Using control plane listeners might improve performance because important controller connections, such as partition leadership changes, are not delayed by data replication across brokers. The majority of data plane traffic consists of this data replication. See Control plane listener feature gate . 1.5. Service account patching Note This feature is controlled using the ServiceAccountPatching feature gate, which is in alpha stage and disabled by default. For more information, see Feature gates . By default, the Cluster Operator does not update service accounts. With this release, you can enable updates to service accounts to be applied in every reconciliation. For example, the Cluster Operator can apply custom labels or annotations to the service account. Custom labels and annotations are configured for custom resources using the template.serviceAccount property. Example custom labels and annotations # ... template: serviceAccount: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... See Service Account patching feature gate . 1.6. Debezium for change data capture integration Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred. Debezium has multiple uses, including: Data replication Updating caches and search indexes Simplifying monolithic applications Data integration Enabling streaming queries Debezium provides connectors (based on Kafka Connect) for the following common databases: Db2 MongoDB MySQL PostgreSQL SQL Server For more information on deploying Debezium with AMQ Streams, refer to the product documentation . 1.7. Service Registry You can use Service Registry as a centralized store of service schemas for data streaming. For Kafka, you can use Service Registry to store Apache Avro or JSON schema. Service Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints. Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code. For example, the schemas to serialize and deserialize messages can be stored in the registry, which are then referenced from the applications that use them to ensure that the messages that they send and receive are compatible with those schemas. Kafka client applications can push or pull their schemas from Service Registry at runtime. For more information on using Service Registry with AMQ Streams, refer to the Service Registry documentation . | [
"edit deployment strimzi-cluster-operator",
"env: # - name: STRIMZI_FEATURE_GATES value: +ControlPlaneListener #",
"template: serviceAccount: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_openshift/features-str |
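Because the STRIMZI_FEATURE_GATES value is a comma-separated list, the two alpha gates described above can be toggled together in the Cluster Operator Deployment. This fragment only illustrates the syntax described in the feature gates section:

env:
  - name: STRIMZI_FEATURE_GATES
    value: +ControlPlaneListener,+ServiceAccountPatching

Using a - prefix instead (for example, -ServiceAccountPatching) disables the corresponding gate.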
20.3.4.5. Configuring ssh-agent The ssh-agent can be used to store your passphrase so that you do not have to enter it each time you make an ssh or scp connection. If you are not running the X Window System, follow these steps from a shell prompt. If you are running GNOME but you do not want to configure it to prompt you for your passphrase when you log in (refer to Section 20.3.4.4, "Configuring ssh-agent with GNOME" ), this procedure will work in a terminal window, such as an XTerm. If you are running X but not GNOME, this procedure will work in a terminal window. However, your passphrase will only be remembered for that terminal window; it is not a global setting. At a shell prompt, type the following command: Then type the command: and enter your passphrase(s). If you have more than one key pair configured, you will be prompted for each one. When you log out, your passphrase(s) will be forgotten. You must execute these two commands each time you log in to a virtual console or open a terminal window. | [
"exec /usr/bin/ssh-agent USDSHELL",
"ssh-add"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Generating_Key_Pairs-Configuring_ssh_agent |
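If your key pair is stored in a non-default location, or you want to confirm which keys the agent is currently holding, ssh-add also accepts an explicit file name and can list loaded keys. The path below is only an example:

ssh-add ~/.ssh/id_rsa
ssh-add -l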
Chapter 2. FIPS support | Chapter 2. FIPS support Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. To use FIPS with AMQ Streams, you must have a FIPS-compliant OpenJDK (Open Java Development Kit) installed on your system. If your RHEL system is FIPS-enabled, OpenJDK automatically switches to FIPS mode when running AMQ Streams. This ensures that AMQ Streams uses the FIPS-compliant security libraries provided by OpenJDK. Minimum password length When running in the FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. Additional resources What are Federal Information Processing Standards (FIPS) 2.1. Installing AMQ Streams with FIPS mode enabled Enable FIPS mode before you install AMQ Streams on RHEL. Red Hat recommends installing RHEL with FIPS mode enabled, as opposed to enabling FIPS mode later. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. With RHEL running in FIPS mode, you must ensure that the AMQ Streams configuration is FIPS-compliant. Additionally, your Java implementation must also be FIPS-compliant. Note Running AMQ Streams on RHEL in FIPS mode requires a FIPS-compliant JDK. We recommend using OpenJDK 17.0.6 or newer. Procedure Install RHEL in FIPS mode. For further information, see the information on security hardening in the RHEL documentation . Proceed with the installation of AMQ Streams. Configure AMQ Streams to use FIPS-compliant algorithms and protocols. If used, ensure that the following configuration is compliant: SSL cipher suites and TLS versions must be supported by the JDK framework. SCRAM-SHA-512 passwords must be at least 32 characters long. Important Make sure that your installation environment and AMQ Streams configuration remains compliant as FIPS requirements change. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/assembly-fips-support-str |
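Before installing, it can be useful to confirm that the RHEL host is actually running in FIPS mode and to generate SCRAM-SHA-512 passwords that meet the 32-character minimum. These commands are an illustrative sketch; fips-mode-setup is part of RHEL's crypto-policies tooling:

fips-mode-setup --check
openssl rand -base64 32

The second command prints a random 44-character string, which comfortably exceeds the minimum password length.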
21.8. virt-resize: Resizing Guest Virtual Machines Offline | 21.8. virt-resize: Resizing Guest Virtual Machines Offline This section provides information about resizing offline guest virtual machines. 21.8.1. Introduction This section describes virt-resize , a tool for expanding or shrinking guest virtual machines. It only works for guest virtual machines that are offline (shut down). It works by copying the guest virtual machine image and leaving the original disk image untouched. This is ideal because you can use the original image as a backup, however there is a trade-off as you need twice the amount of disk space. 21.8.2. Expanding a Disk Image This section demonstrates a simple case of expanding a disk image: Locate the disk image to be resized. You can use the command virsh dumpxml GuestName for a libvirt guest virtual machine. Decide on how you wish to expand the guest virtual machine. Run virt-df -h and virt-filesystems on the guest virtual machine disk, as shown in the following output: The following example demonstrates how to: Increase the size of the first (boot) partition, from approximately 100MB to 500MB. Increase the total disk size from 8GB to 16GB. Expand the second partition to fill the remaining space. Expand /dev/VolGroup00/LogVol00 to fill the new space in the second partition. Make sure the guest virtual machine is shut down. Rename the original disk as the backup. How you do this depends on the host physical machine storage environment for the original disk. If it is stored as a file, use the mv command. For logical volumes (as demonstrated in this example), use lvrename : Create the new disk. The requirements in this example are to expand the total disk size up to 16GB. Since logical volumes are used here, the following command is used: The requirements from step 2 are expressed by this command: The first two arguments are the input disk and output disk. --resize /dev/sda1=500M resizes the first partition up to 500MB. --expand /dev/sda2 expands the second partition to fill all remaining space. --LV-expand /dev/VolGroup00/LogVol00 expands the guest virtual machine logical volume to fill the extra space in the second partition. virt-resize describes what it is doing in the output: Try to boot the virtual machine. If it works (and after testing it thoroughly) you can delete the backup disk. If it fails, shut down the virtual machine, delete the new disk, and rename the backup disk back to its original name. Use virt-df or virt-filesystems to show the new size: Note that resizing guest virtual machines in some cases may become problematic. If virt-resize fails, there are a number of tips that you can review and attempt in the virt-resize(1) man page. For some older Red Hat Enterprise Linux guest virtual machines, you may need to pay particular attention to the tip regarding GRUB. | [
"virt-df -h -a /dev/vg_guests/RHEL6 Filesystem Size Used Available Use% RHEL6:/dev/sda1 98.7M 10.0M 83.6M 11% RHEL6:/dev/VolGroup00/LogVol00 6.8G 2.2G 4.3G 32% virt-filesystems -a disk.img --all --long -h /dev/sda1 ext3 101.9M /dev/sda2 pv 7.9G",
"lvrename /dev/vg_guests/RHEL6 /dev/vg_guests/RHEL6.backup",
"lvcreate -L 16G -n RHEL6 /dev/vg_guests Logical volume \"RHEL6\" created",
"virt-resize /dev/vg_guests/RHEL6.backup /dev/vg_guests/RHEL6 --resize /dev/sda1=500M --expand /dev/sda2 --LV-expand /dev/VolGroup00/LogVol00",
"Summary of changes: /dev/sda1: partition will be resized from 101.9M to 500.0M /dev/sda1: content will be expanded using the 'resize2fs' method /dev/sda2: partition will be resized from 7.9G to 15.5G /dev/sda2: content will be expanded using the 'pvresize' method /dev/VolGroup00/LogVol00: LV will be expanded to maximum size /dev/VolGroup00/LogVol00: content will be expanded using the 'resize2fs' method Copying /dev/sda1 [#####################################################] Copying /dev/sda2 [#####################################################] Expanding /dev/sda1 using the 'resize2fs' method Expanding /dev/sda2 using the 'pvresize' method Expanding /dev/VolGroup00/LogVol00 using the 'resize2fs' method",
"virt-df -h -a /dev/vg_pin/RHEL6 Filesystem Size Used Available Use% RHEL6:/dev/sda1 484.4M 10.8M 448.6M 3% RHEL6:/dev/VolGroup00/LogVol00 14.3G 2.2G 11.4G 16%"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Guest_virtual_machine_disk_access_with_offline_tools-virt_resize_resizing_guest_virtual_machines_offline |
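Once the resized guest has been tested, the backup logical volume from the example above can be removed, or used to roll back if the resize failed. These commands assume the volume names used in this example:

# after successful testing, delete the backup
lvremove /dev/vg_guests/RHEL6.backup

# or, to roll back: delete the new disk and restore the original name
lvremove /dev/vg_guests/RHEL6
lvrename /dev/vg_guests/RHEL6.backup /dev/vg_guests/RHEL6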
Chapter 10. Troubleshooting builds Use the following to troubleshoot build issues. 10.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: USD oc describe quota 10.2. Service certificate generation failure If service certificate generation fails: Issue Service certificate generation fails and the service's service.beta.openshift.io/serving-cert-generation-error annotation contains: Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num . To clear the annotations, enter the following commands: USD oc delete secret <secret_name> USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing an annotation has a - after the annotation name to be removed. | [
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_buildconfig/troubleshooting-builds_running-entitled-builds |
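Before deleting the secret, it can help to inspect the current annotations and confirm the serviceUID mismatch reported in the error. The resource names below are placeholders:

oc get service <service_name> -o jsonpath='{.metadata.annotations}'
oc get secret <secret_name> -o jsonpath='{.metadata.annotations}'

The service output should include the serving-cert-generation-error annotations that you clear in the resolution steps above.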
5.6. Tokens for Storing Certificate System Subsystem Keys and Certificates | 5.6. Tokens for Storing Certificate System Subsystem Keys and Certificates A token is a hardware or software device that performs cryptographic functions and stores public-key certificates, cryptographic keys, and other data. The Certificate System defines two types of tokens, internal and external , for storing key pairs and certificates that belong to the Certificate System subsystems. An internal (software) token is a pair of files, usually called the certificate database ( cert9.db ) and key database ( key4.db ), that the Certificate System uses to generate and store its key pairs and certificates. The Certificate System automatically generates these files in the filesystem of its host machine when first using the internal token. These files are created during the Certificate System subsystem configuration if the internal token was selected for key-pair generation. These security databases are located in the /var/lib/pki/ instance_name /alias directory. An external token refers to an external hardware device, such as a smart card or hardware security module (HSM), that the Certificate System uses to generate and store its key pairs and certificates. The Certificate System supports any hardware tokens that are compliant with PKCS #11. PKCS #11 is a standard set of APIs and shared libraries which isolate an application from the details of the cryptographic device. This enables the application to provide a unified interface for PKCS #11-compliant cryptographic devices. The PKCS #11 module implemented in the Certificate System supports cryptographic devices supplied by many different manufacturers. This module allows the Certificate System to plug in shared libraries supplied by manufacturers of external encryption devices and use them for generating and storing keys and certificates for the Certificate System managers. Consider using external tokens for generating and storing the key pairs and certificates used by Certificate System. These devices are another security measure to safeguard private keys because hardware tokens are sometimes considered more secure than software tokens. Before using external tokens, plan how the external token is going to be used with the subsystem: All system keys for a subsystem must be generated on the same token. The subsystem must be installed in an empty HSM slot. If the HSM slot has previously been used to store other keys, then use the HSM vendor's utilities to delete the contents of the slot. The Certificate System has to be able to create certificates and keys on the slot with default nicknames. If not properly cleaned up, the names of these objects may collide with instances. The Certificate System can also use hardware cryptographic accelerators with external tokens. Many of the accelerators provide the following security features: Fast SSL/TLS connections. Speed is important to accommodate a high number of simultaneous enrollment or service requests. Hardware protection of private keys. These devices behave like smart cards by not allowing private keys to be copied or removed from the hardware token. This is important as a precaution against key theft from an active attack of an online Certificate Manager. The Certificate System supports the nCipher nShield Connect XC hardware security module (HSM), by default. 
Certificate System-supported HSMs are automatically added to the pkcs11.txt database with modutil during the pre-configuration stage of the installation, if the PKCS #11 library modules are in the default installation paths. During configuration, the Security Modules panel displays the supported modules, along with the NSS internal software PKCS #11 module. All supported modules that are detected show a status of Found and is individually marked as either Logged in or Not logged in . If a token is found but not logged in, it is possible to log in using the Login under Operations . If the administrator can log into a token successfully, the password is stored in a configuration file. At the start or restart of the Certificate System instance, the passwords in the password store are used to attempt a login for each corresponding token. Administrators are allowed to select any of the tokens that are logged in as the default token, which is used to generate system keys. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/managing_tokens |
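To see which PKCS #11 modules and tokens a subsystem instance can currently use, the same modutil tool can list the contents of the instance's NSS database. This is a minimal sketch; the instance path is a placeholder for your deployment:

modutil -dbdir /var/lib/pki/instance_name/alias -list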
Chapter 1. Preparing to install with the Agent-based Installer | Chapter 1. Preparing to install with the Agent-based Installer 1.1. About the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image. The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment. Table 1.1. Agent-based Installer supported architectures CPU architecture Connected installation Disconnected installation Comments 64-bit x86 [✓] [✓] 64-bit ARM [✓] [✓] ppc64le [✓] [✓] s390x [✓] [✓] ISO boot is not supported. Instead, use PXE assets. 1.2. Understanding Agent-based Installer As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments. The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts. Note Currently, ISO boot is not supported on IBM Z(R) ( s390x ) architecture. The recommended method is by using PXE assets, which requires specifying additional kernel arguments. The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests: Preferred: install-config.yaml agent-config.yaml or Optional: ZTP manifests cluster-manifests/cluster-deployment.yaml cluster-manifests/agent-cluster-install.yaml cluster-manifests/pull-secret.yaml cluster-manifests/infraenv.yaml cluster-manifests/cluster-image-set.yaml cluster-manifests/nmstateconfig.yaml mirror/registries.conf mirror/ca-bundle.crt 1.2.1. Agent-based Installer workflow One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed. Figure 1.1. Node installation workflow You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image subcommand for the following topologies: A single-node OpenShift Container Platform cluster (SNO) : A node that is both a master and worker. A three-node OpenShift Container Platform cluster : A compact cluster that has three master nodes that are also worker nodes. 
Highly available OpenShift Container Platform cluster (HA) : Three master nodes with any number of worker nodes. 1.2.2. Recommended resources for topologies Recommended cluster resources for the following topologies: Table 1.2. Recommended cluster resources Topology Number of control plane nodes Number of compute nodes vCPU Memory Storage Single-node cluster 1 0 8 vCPUs 16 GB of RAM 120 GB Compact cluster 3 0 or 1 8 vCPUs 16 GB of RAM 120 GB HA cluster 3 2 and above 8 vCPUs 16 GB of RAM 120 GB In the install-config.yaml , specify the platform on which to perform the installation. The following platforms are supported: baremetal vsphere external none Important For platform none : The none option requires the provision of DNS name resolution and load balancing infrastructure in your cluster. See Requirements for a cluster using the platform "none" option in the "Additional resources" section for more information. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. Additional resources Requirements for a cluster using the platform "none" option Increase the network MTU Adding worker nodes to single-node OpenShift clusters 1.3. About FIPS compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. Federal Information Processing Standards (FIPS) compliance is one of the most critical components required in highly secure environments to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 1.4. Configuring FIPS through the Agent-based Installer During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster. For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. 
You can enable FIPS mode through the preferred method of install-config.yaml and agent-config.yaml : You must set value of the fips field to True in the install-config.yaml file: Sample install-config.yaml.file apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True Optional: If you are using the GitOps ZTP manifests, you must set the value of fips as True in the Agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml file: Sample agent-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{"fips": True}' name: sno-cluster namespace: sno-cluster-test Additional resources OpenShift Security Guide Book Support for FIPS cryptography 1.5. Host configuration You can make additional configurations for each host on the cluster in the agent-config.yaml file, such as network configurations and root device hints. Important For each host you configure, you must provide the MAC address of an interface on the host to specify which host you are configuring. 1.5.1. Host roles Each host in the cluster is assigned a role of either master or worker . You can define the role for each host in the agent-config.yaml file by using the role parameter. If you do not assign a role to the hosts, the roles will be assigned at random during installation. It is recommended to explicitly define roles for your hosts. The rendezvousIP must be assigned to a host with the master role. This can be done manually or by allowing the Agent-based Installer to assign the role. Important You do not need to explicitly define the master role for the rendezvous host, however you cannot create configurations that conflict with this assignment. For example, if you have 4 hosts with 3 of the hosts explicitly defined to have the master role, the last host that is automatically assigned the worker role during installation cannot be configured as the rendezvous host. Sample agent-config.yaml file apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8 1.5.2. About root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 1.3. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. 
vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. If you use the udevadm command to retrieve the wwn value, and the command outputs a value for ID_WWN_WITH_EXTENSION , then you must use this value to specify the wwn subfield. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master rootDeviceHints: deviceName: "/dev/sda" 1.6. About networking The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service. If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address of one of the hosts that will become part of the deployed control plane. In an environment without a DHCP server, you can define IP addresses statically. In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds. 1.6.1. DHCP Preferred method: install-config.yaml and agent-config.yaml You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank: Sample agent-config.yaml.file apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 1 The IP address for the rendezvous host. 1.6.2. Static networking Preferred method: install-config.yaml and agent-config.yaml Sample agent-config.yaml.file cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 -hop-address: 192.168.111.1 6 -hop-interface: eno1 table-id: 254 EOF 1 If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields. 2 The MAC address of an interface on the host, used to determine which host to apply the configuration to. 3 The static IP address of the target bare metal host. 4 The static IP address's subnet prefix for the target bare metal host. 5 The DNS server for the target bare metal host. 6 hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. Optional method: GitOps ZTP manifests The optional method of the GitOps ZTP custom resources comprises 6 custom resources; you can configure static IPs in the nmstateconfig.yaml file. 
apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5 1 The static IP address of the target bare metal host. 2 The static IP address's subnet prefix for the target bare metal host. 3 The DNS server for the target bare metal host. 4 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 5 The MAC address of an interface on the host, used to determine which host to apply the configuration to. The rendezvous IP is chosen from the static IP addresses specified in the config fields. 1.7. Requirements for a cluster using the platform "none" option This section describes the requirements for an Agent-based OpenShift Container Platform installation that is configured to use the platform none option. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.7.1. Platform "none" DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The control plane and compute machines Reverse DNS resolution is also required for the Kubernetes API, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. The following DNS records are required for an OpenShift Container Platform cluster using the platform none option and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.4. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes.
If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. 1.7.1.1. Example DNS configuration for platform "none" clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform using the platform none option. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a platform "none" cluster The following example is a BIND zone file that shows sample A records for name resolution in a cluster using the platform none option. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 5 6 Provides name resolution for the control plane machines. 7 8 Provides name resolution for the compute machines. Example DNS PTR record configuration for a platform "none" cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a cluster using the platform none option. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 4 5 Provides reverse DNS resolution for the control plane machines. 6 7 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.7.2. Platform "none" Load balancing requirements Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note These requirements do not apply to single-node OpenShift clusters using the platform none option. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 1.5. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. 
Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.6. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.7.2.1. Example load balancer configuration for platform "none" clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform none option. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 Port 22623 handles the machine config server traffic and points to the control plane machines. 3 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 4 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.8. Example: Bonds and VLAN interface node network configuration The following agent-config.yaml file is an example of a manifest for bond and VLAN interfaces. apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: "150" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254 1 3 Name of the interface. 2 The type of interface. This example creates a VLAN. 4 The type of interface. This example creates a bond. 5 The MAC address of the interface. 6 The mode attribute specifies the bonding mode. 7 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds.
8 Optional: Specifies the search and server settings for the DNS server. 9 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 10 Next hop interface for the node traffic. 1.9. Example: Bonds and SR-IOV dual-nic node network configuration Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following agent-config.yaml file is an example of a manifest for dual port NIC with a bond and SR-IOV interfaces: apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field contains information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set this to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails.
This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Additional resources Configuring network bonding 1.10. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{"auths": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 8 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 11 You must set the platform to none for a single-node cluster. 
You can set the platform to vsphere , baremetal , or none for multi-node clusters. Note If you set the platform to vsphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 13 This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.11. Validation checks before agent ISO creation The Agent-based Installer performs validation checks on user-defined YAML files before the ISO is created. Once the validations are successful, the agent ISO is created. install-config.yaml baremetal , vsphere and none platforms are supported. The networkType parameter must be OVNKubernetes in the case of the none platform. apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms. Some host-specific fields in the bare metal platform configuration that have equivalents in the agent-config.yaml file are ignored. A warning message is logged if these fields are set. agent-config.yaml Each interface must have a defined MAC address. Additionally, all interfaces must have a different MAC address. At least one interface must be defined for each host. World Wide Name (WWN) vendor extensions are not supported in root device hints. The role parameter in the host object must have a value of either master or worker . 1.11.1. ZTP manifests agent-cluster-install.yaml For IPv6, the only supported value for the networkType parameter is OVNKubernetes . The OpenshiftSDN value can be used only for IPv4. cluster-image-set.yaml The ReleaseImage parameter must match the release defined in the installer. 1.12. Next steps Installing a cluster Installing a cluster with customizations | [
"apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True",
"apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{\"fips\": True}' name: sno-cluster namespace: sno-cluster-test",
"apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8",
"- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1",
"cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF",
"apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14",
"networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-to-install-with-agent-based-installer |
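A quick way to verify the platform "none" DNS records described above is the dig command mentioned in the tip. The following is a minimal sketch that assumes the sample cluster name ocp4, base domain example.com, and load balancer address 192.168.1.5 from the example zone files; substitute your own names and addresses.

dig +noall +answer api.ocp4.example.com
dig +noall +answer api-int.ocp4.example.com
dig +noall +answer random-app.apps.ocp4.example.com
dig +noall +answer -x 192.168.1.5

Each forward lookup should return the load balancer address, and the reverse lookup should return the api and api-int record names, before you boot the hosts with the agent ISO.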
11.2. Hardware Requirements | 11.2. Hardware Requirements For installation of Red Hat Enterprise Linux on IBM Power Systems servers, Red Hat supports hard drives connected by a standard internal interface, such as SCSI, SATA, or SAS. Fibre Channel Host Bus Adapters and multipath devices are supported. Vendor-provided drivers may be required for certain hardware. Virtualized installation on Power Systems servers is also supported when using Virtual SCSI (vSCSI) adapters in virtual client LPARs. Note that Red Hat does not support installation to USB drives or SD memory cards. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-supported_installation_hardware-ppc |
6.11. Affinity Groups | 6.11. Affinity Groups Affinity groups help you determine where selected virtual machines run in relation to each other and specified hosts. This capability helps manage workload scenarios such as licensing requirements, high-availability workloads, and disaster recovery. The VM Affinity Rule When you create an affinity group, you select the virtual machines that belong to the group. To define where these virtual machines can run in relation to each other , you enable a VM Affinity Rule : A positive rule tries to run the virtual machines together on a single host; a negative affinity rule tries to run the virtual machines apart on separate hosts. If the rule cannot be fulfilled, the outcome depends on whether the weight or filter module is enabled. The Host Affinity Rule Optionally, you can add hosts to the affinity group. To define where virtual machines in the group can run in relation to hosts in the group , you enable a Host Affinity Rule : A positive rule tries to run the virtual machines on hosts in the affinity group; a negative affinity rule tries to run the virtual machines on hosts that are not in the affinity group. If the rule cannot be fulfilled, the outcome depends on whether the weight or filter module is enabled. The Default Weight Module By default, both rules apply the weight module in the cluster's scheduling policy. With the weight module, the scheduler attempts to fulfill a rule, but allows the virtual machines in the affinity group to run anyway if the rule cannot be fulfilled. For example, with a positive VM Affinity Rule and the weight module enabled, the scheduler tries to run all of the affinity group's virtual machines on a single host. However, if a single host does not have sufficient resources for this, the scheduler runs the virtual machines on multiple hosts. For this module to work, the weight module section of the scheduling policies must contain the VmAffinityGroups and VmToHostsAffinityGroups keywords. The Enforcing Option and Filter Module Both rules have an Enforcing option which applies the filter module in the cluster's scheduling policy. The filter module overrides the weight module. With the filter module enabled, the scheduler requires that a rule be fulfilled. If a rule cannot be fulfilled, the filter module prevents the virtual machines in the affinity group from running. For example, with a positive Host Affinity Rule and Enforcing enabled (the filter module enabled), the scheduler requires the affinity group's virtual machines to run on hosts that are part of the affinity group. However, if those hosts are down, the scheduler does not run the virtual machines at all. For this module to work, the filter module section of the scheduling policies must contain the VmAffinityGroups and VmToHostsAffinityGroups keywords. Examples To see how these rules and options can be used with one another, see Section 6.11.4, "Affinity Groups Examples" . Warning An affinity label is functionally the same as an affinity group with a positive Host Affinity Rule and Enforcing enabled. For affinity labels to work, the filter module section of the scheduling policies must contain Label . If an affinity group and affinity label conflict with each other, the affected virtual machines do not run. To help prevent, troubleshoot, and resolve conflicts, see Section 6.11.5, "Affinity Groups Troubleshooting" . Important Each rule is affected by the weight and filter modules in the cluster's scheduling policy. 
For the VM Affinity Rule rule to work, the scheduling policy must have the VmAffinityGroups keyword in its Weight module and Filter module sections. For the Host Affinity Rule to work, the scheduling policy must have the VmToHostsAffinityGroups keyword in its Weight module and Filter module sections. For more information, see Scheduling Policies in the Administration Guide . Note Affinity groups apply to virtual machines on the cluster level. Moving a virtual machine from one cluster to another removes it from the affinity groups in the original cluster. Virtual machines do not have to restart for the affinity group rules to take effect. 6.11.1. Creating an Affinity Group You can create new affinity groups in the Administration Portal. Creating Affinity Groups Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Affinity Groups tab. Click New . Enter a Name and Description for the affinity group. From the VM Affinity Rule drop-down, select Positive to apply positive affinity or Negative to apply negative affinity. Select Disable to disable the affinity rule. Select the Enforcing check box to apply hard enforcement, or ensure this check box is cleared to apply soft enforcement. Use the drop-down list to select the virtual machines to be added to the affinity group. Use the + and - buttons to add or remove additional virtual machines. Click OK . 6.11.2. Editing an Affinity Group Editing Affinity Groups Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Affinity Groups tab. Click Edit . Change the VM Affinity Rule drop-down and Enforcing check box to the preferred values and use the + and - buttons to add or remove virtual machines to or from the affinity group. Click OK . 6.11.3. Removing an Affinity Group Removing Affinity Groups Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Affinity Groups tab. Click Remove . Click OK . The affinity policy that applied to the virtual machines that were members of that affinity group no longer applies. 6.11.4. Affinity Groups Examples The following examples illustrate how to apply affinity rules for various scenarios, using the different features of the affinity group capability described in this chapter. Example 6.1. High Availability Dalia is the DevOps engineer for a startup. For high availability, a particular system's two virtual machines should run on separate hosts anywhere in the cluster. Dalia creates an affinity group named "high availability" and does the following: Adds the two virtual machines, VM01 and VM02 , to the affinity group. Sets VM Affinity to Negative so the virtual machines try to run on separate hosts. Leaves Enforcing unchecked (disabled) so both virtual machines can continue running in case only one host is available during an outage. Leaves the Hosts list empty so the virtual machines run on any host in the cluster. Example 6.2. Performance Sohni is a software developer who uses two virtual machines to build and test his software many times each day. There is heavy network traffic between these two virtual machines. Running the machines on the same host reduces both network traffic and the effects of network latency on the build and test process. Using high-specification hosts (faster CPUs, SSDs, and more memory) further accelerates this process. 
Sohni creates an affinity group called "build and test" and does the following: Adds VM01 and VM02 , the build and test virtual machines, to the affinity group. Adds the high-specification hosts, host03 , host04 , and host05 , to the affinity group. Sets VM affinity to Positive so the virtual machines try to run on the same host, reducing network traffic and latency effects. Sets Host affinity to Positive so the virtual machines try to run on the high-specification hosts, accelerating the process. Leaves Enforcing unchecked (disabled) for both rules so the virtual machines can run if the high-specification hosts are not available. Example 6.3. Licensing Bandile, a software asset manager, helps his organization comply with the restrictive licensing requirements of a 3D imaging software vendor. These terms require the virtual machines for its licensing server, VM-LS , and imaging workstations, VM-WS # , to run on the same host. Additionally, the physical CPU-based licensing model requires that the workstations run on either of two GPU-equipped hosts, host-gpu-primary or host-gpu-backup . To meet these requirements, Bandile creates an affinity group called "3D seismic imaging" and does the following: Adds the previously mentioned virtual machines and hosts to the affinity group. Sets VM affinity to Positive and selects Enforcing so the licensing server and workstations must run together on one of the hosts, not on multiple hosts. Sets Host affinity to Positive and selects Enforcing so the virtual machines must run on either of the GPU-equipped hosts, not other hosts in the cluster. 6.11.5. Affinity Groups Troubleshooting To help prevent problems with affinity groups Plan and document the scenarios and outcomes you expect when using affinity groups. Verify and test the outcomes under a range of conditions. Follow change management best practices. Only use the Enforcing option if it is required. If you observe problems with virtual machines not running Verify that the cluster has a scheduling policy whose weight module and filter module sections contain VmAffinityGroups and VmToHostsAffinityGroups . For more information, see Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window in the Administration Guide . Check for conflicts between affinity labels and affinity groups. For possible conflicts between affinity labels and affinity groups Understand that an affinity label is the equivalent of an affinity group with a Host affinity rule that is Positive and has Enforcing enabled. Understand that if an affinity label and affinity group conflict with each other, the intersecting set of virtual machines do not run. Determine whether a conflict is possible: Inspect the filter module section of the cluster's scheduling policies. These must contain both a Label keyword and a VmAffinityGroups OR VmToHostsAffinityGroups keyword. Otherwise, a conflict is not possible . (The presence of VmAffinityGroups and VmToHostsAffinityGroups in the weight module section does not matter because Label in a filter module section would override them.) Inspect the affinity groups. They must contain a rule that has Enforcing enabled. Otherwise, a conflict is not possible . If a conflict is possible, identify the set of virtual machines that might be involved: Inspect the affinity labels and groups. Make a list of virtual machines that are members of both an affinity label and an affinity group with an Enforcing option enabled.
For each host and virtual machine in this intersecting set, analyze the conditions under which a potential conflict occurs. Determine whether the actual non-running virtual machines match the ones in the analysis. Finally, restructure the affinity groups and affinity labels to help avoid unintended conflicts. Verify that any changes produce the expected results under a range of conditions. If you have overlapping affinity groups and affinity labels, it can be easier to view them in one place as affinity groups. Consider converting an affinity label into an equivalent affinity group, which has a Host affinity rule with Positive selected and Enforcing enabled. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-affinity_groups |
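The procedures above use the Administration Portal, but affinity groups can also be managed programmatically. The following is a minimal, illustrative sketch with the oVirt/RHV Python SDK (ovirtsdk4); the connection details, cluster name, and the exact AffinityGroup attributes shown here are assumptions for illustration, so confirm them against the SDK reference for your version.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine (URL, credentials, and CA file are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Locate the cluster and its affinity groups service.
clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]
groups_service = clusters_service.cluster_service(cluster.id).affinity_groups_service()

# Create a soft negative VM affinity group, as in the high-availability example.
groups_service.add(
    types.AffinityGroup(
        name='high availability',
        positive=False,   # negative VM affinity: try to keep the VMs apart
        enforcing=False,  # soft rule: weight module only, not the filter module
    ),
)

connection.close()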
Red Hat Data Grid 8.5 Release Notes | Red Hat Data Grid 8.5 Release Notes Red Hat Data Grid 8.5 Get release information for Data Grid 8.5 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/red_hat_data_grid_8.5_release_notes/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/packaging_red_hat_build_of_openjdk_11_applications_in_containers/making-open-source-more-inclusive |
Chapter 8. Publishing on the catalog | Chapter 8. Publishing on the catalog After uploading your test result files, your application is scanned for vulnerabilities. When the scanning is complete, you can publish your product on the Red Hat Ecosystem Catalog . To receive RHOSO certification, ensure: All required tests have been run successfully. Red Hat has reviewed and validated your test configuration report. Perform the following steps to publish your product on the catalog: Procedure Navigate to your Product listing page. Click Publish . Your certified application is now published on the Red Hat Ecosystem Catalog . | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_workflow_guide/publishing-on-the-catalog_uploading-the-test-results-to-red-hat-certification-portal |
3.2. Listing Data Centers | 3.2. Listing Data Centers This Ruby example lists the data centers. # Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of data centers: dcs_service = system_service.data_centers_service # Retrieve the list of data centers and for each one # print its name: dcs = dcs_service.list dcs.each do |dc| puts dc.name end In an environment with only the Default data center, the example outputs: For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/DataCentersService:list . | [
"Get the reference to the root of the services tree: system_service = connection.system_service Get the reference to the service that manages the collection of data centers: dcs_service = system_service.data_centers_service Retrieve the list of data centers and for each one print its name: dcs = dcs_service.list dcs.each do |dc| puts dc.name end",
"Default"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/listing_data_centers |
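If you only need a specific data center, the same list call can usually be narrowed with a search query instead of filtering the results in Ruby. A short sketch, assuming the search: keyword argument that the SDK's list methods accept:

# Reuse the connection from the example above, then search by name:
dcs_service = connection.system_service.data_centers_service
dcs = dcs_service.list(search: 'name=Default')
dcs.each do |dc|
  puts "#{dc.name} (#{dc.id})"
end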
Appendix D. LVM Object Tags | Appendix D. LVM Object Tags An LVM tag is a word that can be used to group LVM2 objects of the same type together. Tags can be attached to objects such as physical volumes, volume groups, and logical volumes. Tags can be attached to hosts in a cluster configuration. Tags can be given on the command line in place of PV, VG or LV arguments. Tags should be prefixed with @ to avoid ambiguity. Each tag is expanded by replacing it with all objects possessing that tag which are of the type expected by its position on the command line. As of the Red Hat Enterprise Linux 6.1 release, LVM tags are strings of up to 1024 characters (for earlier releases the upper length limit was 128 characters). LVM tags cannot start with a hyphen. A valid tag can consist of a limited range of characters only. For the Red Hat Enterprise Linux 6.0 release, the allowed characters are [A-Za-z0-9_+.-]. As of the Red Hat Enterprise Linux 6.1 release, the list of allowed characters has been extended, and tags can contain the "/", "=", "!", ":", "#", and "&" characters. Only objects in a volume group can be tagged. Physical volumes lose their tags if they are removed from a volume group; this is because tags are stored as part of the volume group metadata and that is deleted when a physical volume is removed. The following command lists all the logical volumes with the database tag. The following command lists the currently active host tags. D.1. Adding and Removing Object Tags To add or delete tags from physical volumes, use the --addtag or --deltag option of the pvchange command. To add or delete tags from volume groups, use the --addtag or --deltag option of the vgchange or vgcreate commands. To add or delete tags from logical volumes, use the --addtag or --deltag option of the lvchange or lvcreate commands. As of the Red Hat Enterprise Linux 6.1 release, you can specify multiple --addtag and --deltag arguments within a single pvchange , vgchange , or lvchange command. For example, the following command deletes the tags T9 and T10 and adds the tags T13 and T14 to the volume group grant . | [
"lvs @database",
"lvm tags",
"vgchange --deltag T9 --deltag T10 --addtag T13 --addtag T14 grant"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_tags |
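As a concrete illustration of the commands above, the following session tags a logical volume, addresses it by the tag, and then removes the tag; the volume group and logical volume names are placeholders.

# Add a tag to an existing logical volume, then list volumes by tag:
lvchange --addtag database /dev/vg_data/lv_app
lvs @database

# Remove the tag when it is no longer needed:
lvchange --deltag database /dev/vg_data/lv_app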
function::task_prio | function::task_prio Name function::task_prio - The priority value of the task Synopsis Arguments task task_struct pointer Description This function returns the priority value of the given task. | [
"task_prio:long(task:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-prio |
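For context, task_prio is normally called from a probe handler on a task_struct pointer. A minimal sketch, assuming the task_current function from the task tapset and a one-second timer probe:

# priority.stp: print the priority of the task running when the timer fires
probe timer.s(1) {
  printf("current task priority: %d\n", task_prio(task_current()))
}

Run it with stap priority.stp and stop it with Ctrl+C.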
7.125. libusb1 | 7.125.1. RHBA-2013:0310 - libusb1 bug fix update Updated libusb1 packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The libusb1 packages provide a library to communicate with USB devices from userspace. Bug Fixes BZ# 820205 Prior to this update, the usbredir network protocol caused a conflict with the libusb library. As a consequence, SPICE USB-redirection failed with the following errors in the virt-viewer tool: "usbredirhost error: submitting bulk transfer on ep 02: -1" when trying to redirect one USB device to two guests simultaneously. This update modifies the underlying code to send the error message "Device is busy" and fail after the second attempt. BZ#830751 Prior to this update, USB Request Blocks (URBs) from the user space were not allowed to have transfer buffers larger than an arbitrary maximum. As a consequence, attempting to redirect certain USB mass-storage devices could fail. This update modifies the underlying code to allow programs to submit URBs of any size. If there is not sufficient memory available, the submission fails with an ENOMEM error. In addition, this update also replaces the old limits on individual transfer buffers with a single global limit of 16MB on the total amount of memory in use by the USB file system (usbfs) to prevent programs from submitting a lot of small URBs and so using all the DMA-able kernel memory. All users of libusb1 are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libusb1
End-user Guide | End-user Guide Red Hat CodeReady Workspaces 2.15 Using Red Hat CodeReady Workspaces 2.15 Robert Kratky [email protected] Fabrice Flore-Thebault [email protected] Jana Vrbkova [email protected] Max Leonov [email protected] Red Hat Developer Group Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/end-user_guide/index |
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 4.1-1 Wed Aug 7 2019 Steven Levine Preparing document for 7.7 GA publication. Revision 3.1-2 Thu Oct 4 2018 Steven Levine Preparing document for 7.6 GA publication. Revision 2.1-1 Thu Mar 15 2018 Steven Levine Preparing document for 7.5 GA publication. Revision 2.1-0 Thu Dec 14 2017 Steven Levine Preparing document for 7.5 Beta publication. Revision 0.6-5 Wed Nov 22 2017 Steven Levine Updated version for 7.4. Revision 0.6-3 Thu Jul 27 2017 Steven Levine Document version for 7.4 GA publication. Revision 0.6-1 Wed May 10 2017 Steven Levine Preparing document for 7.4 Beta publication. Revision 0.5-9 Mon Dec 5 2016 Steven Levine Updated Version for 7.3. Revision 0.5-7 Mon Oct 17 2016 Steven Levine Version for 7.3 GA publication. Revision 0.5-6 Thu Aug 18 2016 Steven Levine Preparing document for 7.3 Beta publication. Revision 0.3-2 Mon Nov 9 2015 Steven Levine Preparing document for 7.2 GA publication. Revision 0.3-0 Wed Aug 19 2015 Steven Levine Preparing document for 7.2 Beta publication. Revision 0.2-6 Mon Feb 16 2015 Steven Levine Version for 7.1 GA release Revision 0.2-5 Thu Dec 11 2014 Steven Levine Version for 7.1 Beta release Revision 0.2-4 Thu Dec 04 2014 Steven Levine Version for 7.1 Beta release Revision 0.1-12 Tue Jun 03 2014 John Ha Version for 7.0 GA Release Revision 0.1-6 Mon Jun 13 2013 John Ha Build for beta of Red Hat Enterprise Linux 7 Revision 0.1-1 Wed Jan 16 2013 John Ha Branched from Red Hat Enterprise Linux 6 version of this Document | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/appe-publican-revision_history |
12.6. Common Mistakes to Avoid | It is very common for beginners to make mistakes when editing BIND configuration files. Be sure to avoid the following issues: Take care to increment the serial number when editing a zone file. If the serial number is not incremented, the master nameserver has the correct, new information, but the slave nameservers are never notified of the change and do not attempt to refresh their data of that zone. Be careful to use curly braces and semi-colons correctly in the /etc/named.conf file. An omitted semi-colon or unclosed brace can cause named to refuse to start. Remember to place periods ( . ) in zone files after all FQDNs and omit them on hostnames. A period at the end of a domain name denotes a fully qualified domain name. If the period is omitted, then named appends the name of the zone or the $ORIGIN value to complete it. If a firewall is blocking connections from the named program to other nameservers, edit its configuration file. By default, BIND version 9 uses random ports above 1024 to query other nameservers. Some firewalls, however, expect all nameservers to communicate using only port 53. To force named to use port 53, add the following line to the options statement of /etc/named.conf : | [
"query-source address * port 53;"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-bind-mistakes |
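To make the first two points above concrete, here is a minimal sketch. The zone name, timer values, and directory path are illustrative, not taken from the original chapter. After editing any record in a zone file, increment the serial field of the SOA record, for example:
    @   IN  SOA ns1.example.com. hostmaster.example.com. (
                2024051502 ; serial - incremented from 2024051501 after the edit
                3600       ; refresh
                900        ; retry
                604800     ; expire
                86400 )    ; negative caching TTL
A named.conf options statement that also carries the query-source directive shown above might look like:
    options {
        directory "/var/named";
        query-source address * port 53;
    };
Note the closing brace and the trailing semi-colons; omitting either is exactly the kind of error that prevents named from starting.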
C.4. Failure Recovery and Independent Subtrees | C.4. Failure Recovery and Independent Subtrees In most enterprise environments, the normal course of action for failure recovery of a service is to restart the entire service if any component in the service fails. For example, in Example C.6, "Service foo Normal Failure Recovery" , if any of the scripts defined in this service fail, the normal course of action is to restart (or relocate or disable, according to the service recovery policy) the service. However, in some circumstances certain parts of a service may be considered non-critical; it may be necessary to restart only part of the service in place before attempting normal recovery. To accomplish that, you can use the __independent_subtree attribute. For example, in Example C.7, "Service foo Failure Recovery with __independent_subtree Attribute" , the __independent_subtree attribute is used to accomplish the following actions: If script:script_one fails, restart script:script_one, script:script_two, and script:script_three. If script:script_two fails, restart just script:script_two. If script:script_three fails, restart script:script_one, script:script_two, and script:script_three. If script:script_four fails, restart the whole service. Example C.6. Service foo Normal Failure Recovery Example C.7. Service foo Failure Recovery with __independent_subtree Attribute In some circumstances, if a component of a service fails you may want to disable only that component without disabling the entire service, to avoid affecting other services that use other components of that service. As of the Red Hat Enterprise Linux 6.1 release, you can accomplish that by using the __independent_subtree="2" attribute, which designates the independent subtree as non-critical. Note You may only use the non-critical flag on singly-referenced resources. The non-critical flag works with all resources at all levels of the resource tree, but should not be used at the top level when defining services or virtual machines. As of the Red Hat Enterprise Linux 6.1 release, you can set maximum restart and restart expirations on a per-node basis in the resource tree for independent subtrees. To set these thresholds, you can use the following attributes: __max_restarts configures the maximum number of tolerated restarts prior to giving up. __restart_expire_time configures the amount of time, in seconds, after which a restart is no longer attempted. | [
"<service name=\"foo\"> <script name=\"script_one\" ...> <script name=\"script_two\" .../> </script> <script name=\"script_three\" .../> </service>",
"<service name=\"foo\"> <script name=\"script_one\" __independent_subtree=\"1\" ...> <script name=\"script_two\" __independent_subtree=\"1\" .../> <script name=\"script_three\" .../> </script> <script name=\"script_four\" .../> </service>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clust-rsc-failure-rec-ca |
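The attributes introduced in the Red Hat Enterprise Linux 6.1 release are not shown in the two examples above, so the following sketch illustrates where they would go. It reuses the hypothetical script resources from Example C.7, and the attribute values are illustrative only: script_two is marked non-critical with __independent_subtree="2", so its failure disables just that component rather than restarting the subtree, while the subtree rooted at script_one tolerates at most three in-place restarts within a 300-second window before normal recovery of the whole service takes over.
    <service name="foo">
        <script name="script_one" __independent_subtree="1" __max_restarts="3" __restart_expire_time="300" ...>
            <script name="script_two" __independent_subtree="2" .../>
        </script>
        <script name="script_three" .../>
    </service>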
Chapter 1. Operators overview | Chapter 1. Operators overview Operators are among the most important components of OpenShift Dedicated. They are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and the OpenShift CLI ( oc ). They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators are designed specifically for Kubernetes-native applications to implement and automate common Day 1 operations, such as installation and configuration. Operators can also automate Day 2 operations, such as autoscaling up or down and creating backups. All of these activities are directed by a piece of software running on your cluster. While both follow similar Operator concepts and goals, Operators in OpenShift Dedicated are managed by two different systems, depending on their purpose: Cluster Operators Managed by the Cluster Version Operator (CVO) and installed by default to perform cluster functions. Optional add-on Operators Managed by Operator Lifecycle Manager (OLM) and can be made accessible for users to run in their applications. Also known as OLM-based Operators . 1.1. For developers As an Operator author, you can perform the following development tasks for OLM-based Operators: Install Operator SDK CLI . Create Go-based Operators , Ansible-based Operators , and Helm-based Operators . Use Operator SDK to build, test, and deploy an Operator . Create an application from an installed Operator through the web console . 1.2. For administrators As an administrator with the dedicated-admin role, you can perform the following Operator tasks: Manage custom catalogs . Install an Operator from OperatorHub . View Operator status . Manage Operator conditions . Upgrade installed Operators . Delete installed Operators . Configure proxy support . 1.3. Next steps To understand more about Operators, see What are Operators? | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/operators/operators-overview
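Since the chapter distinguishes Operators managed by the CVO from those managed by OLM, a quick way to see both populations on a live cluster is sketched below. This assumes you have oc access with sufficient privileges (for example, the dedicated-admin role mentioned above); the commands are standard oc queries rather than steps taken from this chapter:
    # Cluster Operators managed by the Cluster Version Operator (CVO)
    oc get clusteroperators
    # Optional add-on (OLM-based) Operators, surfaced as ClusterServiceVersions and Subscriptions
    oc get csv --all-namespaces
    oc get subscriptions --all-namespaces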
Chapter 3. Deploying OpenShift Container Storage on Red Hat OpenStack Platform in external mode | Chapter 3. Deploying OpenShift Container Storage on Red Hat OpenStack Platform in external mode Red Hat OpenShift Container Storage can use an externally hosted Red Hat Ceph Storage (RHCS) cluster as the storage provider on Red Hat OpenStack Platform. See Planning your deployment for more information. For instructions regarding how to install a RHCS 4 cluster, see Installation guide . Follow these steps to deploy OpenShift Container Storage in external mode: Install the OpenShift Container Storage Operator . Create the OpenShift Container Storage Cluster Service . 3.1. Installing Red Hat OpenShift Container Storage Operator You can install Red Hat OpenShift Container Storage Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. You have satisfied any additional requirements required. For more information, see Planning your deployment . Note When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Container Storage resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Container Storage chapter in Managing and Allocating Storage Resources guide. Procedure Log in to OpenShift Web Console. Click Operators OperatorHub . Search for OpenShift Container Storage from the list of operators and click on it. Click Install . Set the following options on the Install Operator page: Channel as stable-4.8 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it will be created during the operator installation. Approval Strategy as Automatic or Manual . Click Install . If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your operator without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the operator updated to the new version. Verification step Verify that the OpenShift Container Storage Operator shows a green tick indicating successful installation. 3.2. Creating an OpenShift Container Storage Cluster service for external mode You need to create a new OpenShift Container Storage cluster service after you install OpenShift Container Storage operator on OpenShift Container Platform deployed on Red Hat OpenStack platform. Prerequisites Ensure the OpenShift Container Platform version is 4.8 or above before deploying OpenShift Container Storage 4.8. OpenShift Container Storage operator must be installed. For more information, see Installing OpenShift Container Storage Operator using the Operator Hub . Red Hat Ceph Storage version 4.2z1 or later is required for the external cluster. For more information, see this knowledge base article on Red Hat Ceph Storage releases and corresponding Ceph package versions . 
If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and is not a freshly deployed cluster, you must manually set the application type for CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode. For more details, see Troubleshooting CephFS PVC creation in external mode . Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . Red Hat recommends that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you move ahead with OpenShift Container Storage deployment. Red Hat recommends to use a separate pool for each OpenShift Container Storage cluster. Procedure Click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Container Storage Create Instance link of Storage Cluster. Select Mode as External . By default, Internal is selected as deployment mode. Figure 3.1. Connect to external cluster section on Create Storage Cluster form In the Connect to external cluster section, click on the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with admin key . Run the following command on the RHCS node to view the list of available arguments. Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. Note You can also run the script from inside a MON container (containerized deployment) or from a MON node (rpm deployment). To retrieve the external cluster details from the RHCS cluster, run the following command For example: In the above example, --rbd-data-pool-name is a mandatory parameter used for providing block storage in OpenShift Container Storage. --rgw-endpoint is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Container Storage. Provide the endpoint in the following format: <ip_address>:<port> --monitoring-endpoint is optional. It is the IP address of the active ceph-mgr reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. -- run-as-user is an optional parameter used for providing a name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. 
The permissions for the new user is set as: caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool= RGW_POOL_PREFIX.rgw.meta , allow r pool= .rgw.root , allow rw pool= RGW_POOL_PREFIX.rgw.control , allow rx pool= RGW_POOL_PREFIX.rgw.log , allow x pool= RGW_POOL_PREFIX.rgw.buckets.index Example of JSON output generated using the python script: Save the JSON output to a file with .json extension Note For OpenShift Container Storage to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remains unchanged on the RHCS external cluster after the storage cluster creation. Click External cluster metadata Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. Figure 3.2. Json file content Click Create . The Create button is enabled only after you upload the .json file. Verification steps Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark. Click Operators Installed Operators Storage Cluster link to view the storage cluster installation status. Alternatively, when you are on the Operator Details tab, you can click on the Storage Cluster tab to view the status. To verify that OpenShift Container Storage, pods and StorageClass are successfully installed, see Verifying your external mode OpenShift Container Storage installation . 3.3. Verifying your OpenShift Container Storage installation for external mode Use this section to verify that OpenShift Container Storage is deployed correctly. 3.3.1. Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop down list. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Container Storage components" Verify that the following pods are in running state: Table 3.1. Pods corresponding to OpenShift Container Storage components Component Corresponding pods OpenShift Container Storage Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 3.3.2. Verifying that the OpenShift Container Storage cluster is healthy Click Storage Overview from the left pane of the OpenShift Web Console and click Block and File tab. In the Status card , verify that Storage Cluster has a green tick mark. In the Details card , verify that the cluster information is displayed as follows: Service Name OpenShift Container Storage Cluster Name ocs-external-storagecluster Provider OpenStack Mode External Version ocs-operator-4.8.0 For more information on the health of OpenShift Container Storage cluster using the Block and File dashboard, see Monitoring OpenShift Container Storage . 3.3.3. 
Verifying that the Multicloud Object Gateway is healthy Click Storage Overview from the left pane of the OpenShift Web Console and click the Object tab. In the Status card , verify that both Object Service and Data Resiliency are in Ready state (green tick). Figure 3.3. Health status card in Object Dashboard In the Details card , verify that the MCG information is displayed appropriately as follows: Service Name OpenShift Container Storage System Name Multicloud Object Gateway RADOS Object Gateway Provider OpenStack Version ocs-operator-4.8.0 Note The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details were included while deploying OpenShift Container Storage in external mode. For more information on the health of OpenShift Container Storage cluster using the object dashboard, see Monitoring OpenShift Container Storage . 3.3.4. Verifying that the storage classes are created and listed Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Container Storage cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If an MDS is not deployed in the external cluster, ocs-external-storagecluster-cephfs storage class will not be created. If an RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see Red Hat Ceph Storage documentation 3.3.5. Verifying that Ceph cluster is connected Run the following command to verify if the OpenShift Container Storage cluster is connected to the external Red Hat Ceph Storage cluster. 3.3.6. Verifying that storage cluster is ready Run the following command to verify if the storage cluster is ready and the External option is set to true. 3.4. Uninstalling OpenShift Container Storage in external mode 3.4.1. Uninstalling OpenShift Container Storage in External mode Use the steps in this section to uninstall OpenShift Container Storage. Uninstalling OpenShift Container Storage does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful Note The uninstall.ocs.openshift.io/cleanup-policy is not applicable for external mode. The below table provides information on the different values that can used with these annotations: Table 3.2. uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pauses the uninstall process until the PVCs and the OBCs are removed by the administrator/user mode forced No Rook and NooBaa proceeds with uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist respectively You can change the uninstall mode by editing the value of the annotation by using the following commands: Prerequisites Ensure that the OpenShift Container Storage cluster is in a healthy state. 
The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage. Procedure Delete the volume snapshots that are using OpenShift Container Storage. List the volume snapshots from all the namespaces From the output of the command, identify and delete the volume snapshots that are using OpenShift Container Storage. Delete PVCs and OBCs that are using OpenShift Container Storage. In the default uninstall mode (graceful), the uninstaller waits till all the PVCs and OBCs that use OpenShift Container Storage are deleted. If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you may set the uninstall mode annotation to "forced" and skip this step. Doing so will result in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Container Storage. See Removing monitoring stack from OpenShift Container Storage Delete OpenShift Container Platform Registry PVCs using OpenShift Container Storage. Removing OpenShift Container Platform registry from OpenShift Container Storage Delete OpenShift Container Platform logging PVCs using OpenShift Container Storage. Removing the cluster logging operator from OpenShift Container Storage Delete other PVCs and OBCs provisioned using OpenShift Container Storage. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Container Storage. The script ignores the PVCs and OBCs that are used internally by Openshift Container Storage. Delete the OBCs. Delete the PVCs. Ensure that you have removed any custom backing stores, bucket classes, etc., created in the cluster. Delete the Storage Cluster object and wait for the removal of the associated resources. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Confirm all PVs provisioned using OpenShift Container Storage are deleted. If there is any PV left in the Released state, delete it. Delete the Multicloud Object Gateway storageclass. Remove CustomResourceDefinitions . To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console, Click Storage . Verify that Overview no longer appears under Storage. 3.4.2. Removing monitoring stack from OpenShift Container Storage Use this section to clean up the monitoring stack from the OpenShift Container Storage. The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use OpenShift Container Platform monitoring stack. For more information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Edit the monitoring configmap . 
Remove any config sections that reference the OpenShift Container Storage storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs. List the pods consuming the PVC. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using OpenShift Container Storage PVC. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. 3.4.3. Removing OpenShift Container Platform registry from OpenShift Container Storage Use this section to clean up OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure an alternative storage, see image registry The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry should have been configured to use an OpenShift Container Storage PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. 3.4.4. Removing the cluster logging operator from OpenShift Container Storage To clean the cluster logging operator from the OpenShift Container Storage, follow the steps in the procedure. The PVCs created as a part of configuring cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance must be configured to use OpenShift Container Storage PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete PVCs. | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"python3 ceph-external-cluster-details-exporter.py --help",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"client.healthchecker\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"ceph-rbd\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}]",
"oc get cephcluster -n openshift-storage",
"NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH ocs-external-storagecluster-cephcluster 31m15s Connected Cluster connected successfully HEALTH_OK",
"oc get storagecluster -n openshift-storage",
"NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 31m15s Ready true 2021-02-29T20:43:04Z 4.8.0",
"oc annotate storagecluster ocs-external-storagecluster uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated",
"oc get volumesnapshot --all-namespaces",
"oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>",
"#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done",
"oc delete obc <obc name> -n <project name>",
"oc delete pvc <pvc name> -n <project-name>",
"oc delete -n openshift-storage storagecluster --all --wait=true",
"oc project default oc delete project openshift-storage --wait=true --timeout=5m",
"oc get project openshift-storage",
"oc get pv oc delete pv <pv name>",
"oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m",
"oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io --wait=true --timeout=5m",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .",
". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Terminating 0 10h pod/alertmanager-main-1 3/3 Terminating 0 10h pod/alertmanager-main-2 3/3 Terminating 0 10h pod/cluster-monitoring-operator-84cd9df668-zhjfn 1/1 Running 0 18h pod/grafana-5db6fd97f8-pmtbf 2/2 Running 0 10h pod/kube-state-metrics-895899678-z2r9q 3/3 Running 0 10h pod/node-exporter-4njxv 2/2 Running 0 18h pod/node-exporter-b8ckz 2/2 Running 0 11h pod/node-exporter-c2vp5 2/2 Running 0 18h pod/node-exporter-cq65n 2/2 Running 0 18h pod/node-exporter-f5sm7 2/2 Running 0 11h pod/node-exporter-f852c 2/2 Running 0 18h pod/node-exporter-l9zn7 2/2 Running 0 11h pod/node-exporter-ngbs8 2/2 Running 0 18h pod/node-exporter-rv4v9 2/2 Running 0 18h pod/openshift-state-metrics-77d5f699d8-69q5x 3/3 Running 0 10h pod/prometheus-adapter-765465b56-4tbxx 1/1 Running 0 10h pod/prometheus-adapter-765465b56-s2qg2 1/1 Running 0 10h pod/prometheus-k8s-0 6/6 Terminating 1 9m47s pod/prometheus-k8s-1 6/6 Terminating 1 9m47s pod/prometheus-operator-cbfd89f9-ldnwc 1/1 Running 0 43m pod/telemeter-client-7b5ddb4489-2xfpz 3/3 Running 0 10h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0 Bound pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1 Bound pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2 Bound pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0 Bound pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1 Bound pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h",
"oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m",
"oc edit configs.imageregistry.operator.openshift.io",
". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .",
". . . storage: emptyDir: {} . . .",
"oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m",
"oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m",
"oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/deploying-openshift-container-storage-on-red-hat-openstack-platform-in-external-mode_external-osp |
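Collecting the checks from the verification sections above into one pass, a short sweep after an external-mode deployment might look like the following. It assumes the default resource names used throughout this chapter (the openshift-storage namespace and the ocs-external-storagecluster-* storage class prefix); adjust the grep pattern if your names differ:
    oc get pods -n openshift-storage
    oc get cephcluster -n openshift-storage          # expect Connected / HEALTH_OK
    oc get storagecluster -n openshift-storage       # expect Phase Ready with EXTERNAL true
    oc get storageclass | grep -e ocs-external-storagecluster -e openshift-storage.noobaa.io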
Chapter 3. Important Changes to External Kernel Parameters | Chapter 3. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 7.4. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. Updated /proc/sys/kernel entries hung_task_panic Controls the behavior of the kernel when an unresponsive task is detected. This file occurs if CONFIG_DETECT_HUNG_TASK is enabled. Format: { "0" | "1" } 0 - Continue operation. Default behavior. 1 - Panic immediately. hung_task_check_count Provides the upper bound on the number of tasks that are checked. This file occurs if CONFIG_DETECT_HUNG_TASK is enabled. hung_task_timeout_secs Checks interval. Reports a warning in case that a task in D state is not scheduled for longer time than this value. This file occurs if CONFIG_DETECT_HUNG_TASK is enabled. 0 - Infinite timeout - no checking done. hung_task_warning Provides the maximum number of warnings to report during a check interval. When this value is reached, no more warnings will be reported. This file occurs if CONFIG_DETECT_HUNG_TASK is enabled. -1 - Reports an infinite number of warnings. panic_on_rcu_stall When set to 1, calls the panic() function after RCU stall detection messages. This is useful to define the root cause of RCU stalls using a vmcore. 0 - Do not panic when RCU stall takes place. Default behavior. 1 - Panic after printing RCU stall messages. Updated /proc/sys/user entries You can use the files in the /proc/sys/user directory to override the default limits for the number of namespaces and other objects that have per user namespace limits. The purpose of these limits is to stop programs that malfunction and attempt to create a high number of objects. The default values of these limits are adjusted so that any program in normal operation cannot reach them. The creation of per user namespace objects is charged to the user in the user namespace who created the object and who verified to be below the per user limit in that user namespace. The creation of such objects happens in user namespaces and is also charged to all users who created user namespaces. This recursive counting of created objects ensures that creating a user namespace does not allow a user to exceed their current limits. The updated files in /proc/sys/user are: max_cgroup_namespaces The maximum number of control group namespaces that any user in the current user namespace can create. max_ipc_namespaces The maximum number of interprocess communication namespaces that any user in the current user namespace can create. max_mnt_namespaces The maximum number of mount namespaces that any user in the current user namespace can create. max_net_namespaces The maximum number of network namespaces that any user in the current user namespace can create. max_pid_namespaces The maximum number of process ID namespaces that any user in the current user namespace can create. max_user_namespaces The maximum number of user ID namespaces that any user in the current user namespace can create. max_uts_namespaces The maximum number of UNIX Timesharing System (UTS) namespaces that any user in the current user namespace can create. Kernel parameters acpi_force_table_verification [HW,ACPI] Enables table checksum verification during early stage. 
By default, disabled on the 32-bit AMD and Intel architecture due to early mapping size limitation. acpi_no_auto_ssdt [HW,ACPI] Disables automatic loading of Secondary System Description Table (SSDT). acpi_no_static_ssdt [HW,ACPI] Disables installation of static SSDTs at early boot time. By default, SSDTs contained in the Root System Description Table (RSDT) or eXtended System Descriptor Table (XSDT) are installed automatically and they appear in the /sys/firmware/acpi/tables directory. This option turns off this feature. Specifying this option does not affect dynamic table installation which installs SSDT tables to the /sys/firmware/acpi/tables/dynamic directory. irqaffinity= [SMP] Sets the default irq affinity mask in the following formats: Format: <cpu number>,..., <cpu number> or <cpu number>-<cpu number> You can use a positive range in ascending order or a combination. <cpu number>,...,<cpu number>-<cpu number> nokaslr [KNL] Disables kernel and module base offset Address Space Layout Randomization (ASLR) if CONFIG_RANDOMIZE_BASE is set. nohibernate Disables hibernation and resume. crash_kexec_post_notifiers Runs kdump after running panic-notifiers and dumping kmsg. hpbussize=nn [PCI] Provides the minimum amount of additional bus numbers reserved for buses below a hotplug bridge. Default is 1. pcie_port_pm= [PCIE] PCIe port power management handling: Format: { "off" | "force" } off - Disables power management of all PCIe ports. force - Forcibly enables power management of all PCIe ports. sunrpc.svc_rpc_per_connection_limit= [NFS,SUNRPC] Limits the number of requests for the server to process in parallel from a single connection. The default value is 0 (no limit). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/chap-red_hat_enterprise_linux-7.4_release_notes-kernel_parameters_changes
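Because the runtime entries above map to paths under /proc/sys, they can be inspected and exercised directly with sysctl. The values below are illustrative, not recommendations:
    # Hung-task detection (present only when CONFIG_DETECT_HUNG_TASK is enabled)
    cat /proc/sys/kernel/hung_task_timeout_secs
    sysctl -w kernel.hung_task_panic=1          # panic on a detected hang so a vmcore can be captured
    sysctl -w kernel.hung_task_timeout_secs=120 # warn about tasks stuck in D state for more than 120 seconds
    # Per-user-namespace object limits
    cat /proc/sys/user/max_user_namespaces
    sysctl -w user.max_mnt_namespaces=4096
Boot-time parameters such as irqaffinity=0-3 or nokaslr are appended to the kernel command line (for example, through the GRUB configuration) rather than set through sysctl.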
Chapter 15. catalog | Chapter 15. catalog This chapter describes the commands under the catalog command. 15.1. catalog list List services in the service catalog Usage: Table 15.1. Optional Arguments Value Summary -h, --help Show this help message and exit Table 15.2. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 15.3. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 15.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 15.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 15.2. catalog show Display service catalog details Usage: Table 15.6. Positional Arguments Value Summary <service> Service to display (type or name) Table 15.7. Optional Arguments Value Summary -h, --help Show this help message and exit Table 15.8. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 15.9. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 15.10. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 15.11. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack catalog list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack catalog show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <service>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/catalog |
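Putting the options from the tables above together, a few representative invocations are sketched here. The service name identity and the column names Name and Type are assumptions about a typical Identity service catalog entry, not values taken from this chapter:
    openstack catalog list
    openstack catalog list -f json --noindent
    openstack catalog list -f csv -c Name -c Type --sort-column Name
    openstack catalog show identity -f shell --prefix CATALOG_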
Chapter 4. User-provisioned infrastructure | Chapter 4. User-provisioned infrastructure 4.1. Preparing to install a cluster on AWS You prepare to install an OpenShift Container Platform cluster on AWS by completing the following steps: Verifying internet connectivity for your cluster. Configuring an AWS account . Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Preparing the user-provisioned infrastructure. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, manually creating long-term credentials for AWS or configuring an AWS cluster to use short-term credentials with Amazon Web Services Security Token Service (AWS STS). 4.1.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.1.2. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.1.3. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.1.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.1.5. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.2. Installation requirements for user-provisioned infrastructure on AWS Before you begin an installation on infrastructure that you provision, be sure that your AWS environment meets the following installation requirements. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 4.2.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 4.2.1.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 4.2.1.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 4.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 4.2.1.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 4.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 4.2.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 4.2.3. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.
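If you modify the provided templates, you can check the CloudFormation syntax before you create any stacks. The following command is a minimal sketch, assuming that you saved a customized template as vpc-template.yaml ; it validates template syntax only and does not confirm that the described resources meet the cluster requirements: USD aws cloudformation validate-template --template-body file://vpc-template.yaml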
4.2.3.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using; for an illustrative set of commands, see the example after the VPC component table in this section. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones, and the subnets must be associated with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.
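As referenced in Option 1, the following commands are an illustrative sketch of creating the endpoints; the VPC ID, Region, subnet ID, and route table ID values are placeholders that you must replace with values from your environment. The S3 endpoint is a gateway endpoint that attaches to route tables, while the EC2 and Elastic Load Balancing endpoints are interface endpoints that attach to subnets: USD aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --service-name com.amazonaws.<aws_region>.s3 --route-table-ids <route_table_id> USD aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Interface --service-name com.amazonaws.<aws_region>.ec2 --subnet-ids <subnet_id> USD aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Interface --service-name com.amazonaws.<aws_region>.elasticloadbalancing --subnet-ids <subnet_id>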
Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. 
Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 4.2.3.2. 
Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 4.2.4. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 4.3. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribePublicIpv4Pools (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:DisassociateAddress (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 4.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 4.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener elasticloadbalancing:SetSecurityGroups Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 4.6. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagInstanceProfile iam:TagRole Note If you specify an existing IAM role in the install-config.yaml file, the following IAM permissions are not required: iam:CreateRole , iam:DeleteRole , iam:DeleteRolePolicy , and iam:PutRolePolicy . If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 4.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 4.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketTagging s3:PutEncryptionConfiguration Example 4.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 4.10. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 4.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 4.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 4.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 4.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 4.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 4.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole Example 4.17. Required permissions for enabling Bring your own public IPv4 addresses (BYOIP) feature for installation ec2:DescribePublicIpv4Pools ec2:DisassociateAddress 4.2.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . 4.3. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.18, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. 
Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 4.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You prepared the user-provisioned infrastructure. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 4.3.2. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 4.3.2.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 4.3.2.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. 
If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 4.3.2.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . 
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.3.2.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. 
Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 4.3.3. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 4.3.4. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 4.3.4.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 4.18. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.5. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for host names and other identifying names. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. For one way to retrieve these stack output values, see the example that follows the Important note below. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.
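The PublicSubnets , PrivateSubnets , and VpcId parameter values come from the outputs of the VPC stack that you created earlier. If you need to look up those output values again, the following command is a minimal sketch, assuming that the VPC stack is named cluster-vpc ; adjust the stack name to match your environment: USD aws cloudformation describe-stacks --stack-name cluster-vpc --query 'Stacks[0].Outputs' --output table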
Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 4.3.5.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 4.19. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.11" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.11" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 4.3.6. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
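The parameter file in the following procedure reuses the CIDR block and ID of the VPC that you already created. If you need to confirm the primary CIDR block of an existing VPC, one way to look it up is sketched below; the VPC ID shown is a placeholder, not a value from this document.
# Print the primary CIDR block of an existing VPC (replace the placeholder VPC ID).
aws ec2 describe-vpcs --vpc-ids vpc-0example1234567890 \
  --query "Vpcs[0].CidrBlock" --output text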
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 4.3.6.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 4.20. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.7. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 4.3.8. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 4.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-01bf6b6fca71a7dc3 ap-east-1 ami-0594c08334dcc4afb ap-northeast-1 ami-0313928874609075d ap-northeast-2 ami-09cfc5a33f840ce70 ap-northeast-3 ami-02fece2c48e16e9f2 ap-south-1 ami-063d0eaf658eb4dc5 ap-south-2 ami-0c4930cae17448786 ap-southeast-1 ami-068f696694b2fc0f1 ap-southeast-2 ami-04aee88a86e139991 ap-southeast-3 ami-0363d9df44ce25cd3 ap-southeast-4 ami-05b72aa8744449f86 ca-central-1 ami-0a7c95e80fb37ade8 ca-west-1 ami-0818def2f3d7a696d eu-central-1 ami-02c8714aef084ee90 eu-central-2 ami-083d349477a4e9f69 eu-north-1 ami-03f4002a3746bc66b eu-south-1 ami-038d816008adca0be eu-south-2 ami-099f491d6ab9706d0 eu-west-1 ami-0f0ebf16ff38e816f eu-west-2 ami-0abb7730ffd4d9944 eu-west-3 ami-032c22188cbfff12c il-central-1 ami-08171fe42c6af2676 me-central-1 ami-0f1c6a3d726f5b7b5 me-south-1 ami-019faf03d74520d13 sa-east-1 ami-01591af00107320c3 us-east-1 ami-08f1807771f4e468b us-east-2 ami-078e26f293629fe91 us-gov-east-1 ami-068e56023ec09c2b1 us-gov-west-1 ami-09ba2da65d9d836cf us-west-1 ami-01d1d2ed3d63466da us-west-2 ami-0d769ba340e913a8c Table 4.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-02d76a4f0c0ee24cd ap-east-1 ami-07e78c2c0f5f81a49 ap-northeast-1 ami-0e3a6e27f6940ab63 ap-northeast-2 ami-0116db61662393b23 ap-northeast-3 ami-07dd3d8930d1c27eb ap-south-1 ami-07121d273482babf9 ap-south-2 ami-084f561e41c26ab95 ap-southeast-1 ami-02301ea2b50fc247f ap-southeast-2 ami-0690a605a9bb33d00 ap-southeast-3 ami-08d243c0580c87c80 ap-southeast-4 ami-013dad9ce63ec3dc0 ca-central-1 ami-0238dbad4895283b7 ca-west-1 ami-0faded0cfdf14248a eu-central-1 ami-085a88c5d03df3675 eu-central-2 ami-08d6da8ffaa81e2b4 eu-north-1 ami-0077ccc8e7962b7ee eu-south-1 ami-02c649a544c9395f2 eu-south-2 ami-0a955bda5a7189ebd eu-west-1 ami-040969e306c9a3efa eu-west-2 ami-06b30fcc40988cc96 eu-west-3 ami-00dc4e0a7798ae0c5 il-central-1 ami-0c1adf273a43b58e2 me-central-1 ami-00817b16f81e58b86 me-south-1 ami-0f72a9bb1975ba0f9 sa-east-1 ami-083cf54e8ffc2d716 us-east-1 ami-0eebf083d985a0bcf us-east-2 ami-0b04071739ccf4af2 us-gov-east-1 ami-092fec5203140ddd8 us-gov-west-1 ami-078ee5edd87052e70 us-west-1 ami-0344d1d886514e258 us-west-2 ami-07ef3531e7692a7ae 4.3.8.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file. 4.3.8.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. 
You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.18.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 4.3.9. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. 
Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. 
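If your deployment requires a presigned URL instead of the s3:// schema (for example, when you use custom endpoints or a proxy), you can generate one with the AWS CLI. A minimal sketch, assuming the bucket created above and an arbitrary one-hour expiry:
# Generate a time-limited HTTPS URL for the bootstrap Ignition config file.
# Replace <cluster-name> with your cluster name; 3600 seconds is an example expiry.
aws s3 presign "s3://<cluster-name>-infra/bootstrap.ign" --expires-in 3600
You would then supply the resulting HTTPS URL as the BootstrapIgnitionLocation parameter value instead of the s3:// form shown in the following step.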
Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. 
Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 4.3.9.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 4.21. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. 
Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: 
Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 4.3.10. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. 
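The parameter file in the following procedure requires the base64-encoded certificate authority string from the master.ign file in your installation directory. Assuming the pointer-config layout that the installation program generates by default (an Ignition spec 3.x config), one way to extract that value is:
# Print the CA data URL from the master Ignition pointer config.
# The jq path assumes the default Ignition spec 3.x pointer-config structure.
jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign
Verify that the output begins with data:text/plain;charset=utf-8;base64, before using it as the CertificateAuthorities parameter value.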
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. 
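To double-check the CertificateAuthorities value that you supplied in the parameter file, you can read it back from the master.ign file; a minimal approach, assuming jq is installed and the default Ignition layout produced by the installation program, is:

$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign

The command prints the data:text/plain;charset=utf-8;base64,... string that the CertificateAuthorities parameter expects.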
Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 4.3.10.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 4.22. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration 
Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.11. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. 
You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to launch the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line.
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 4.3.11.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 4.23. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.12. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. 
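If the command instead times out or exits with a FATAL warning, collecting diagnostics from the bootstrap machine early can save time. A sketch of one way to do this, assuming SSH access to the machines with the key that you configured and substituting the IP addresses from your environment, is:

$ ./openshift-install gather bootstrap --dir <installation_directory> \
    --bootstrap <bootstrap_public_ip> \
    --master <control_plane_private_ip>

This writes a log bundle into the installation directory that you can inspect or attach to a support case.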
Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 4.3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.3.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
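During installation windows when many nodes join at once, approving CSRs one at a time can be tedious. A minimal watch-and-approve loop, shown here only as a sketch, approves every pending CSR without verifying the requester's identity, so it is suitable only for a supervised installation and not as a long-term approval mechanism:

$ while true; do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve
    sleep 30
  done

Stop the loop as soon as all nodes report the Ready status.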
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 4.3.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 4.3.15.1. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 4.3.15.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. 
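The lifecycle policy can be applied with the AWS CLI. The following is a sketch, with the bucket name as a placeholder that you must replace; the rule ID is arbitrary:

$ aws s3api put-bucket-lifecycle-configuration \
    --bucket <bucket_name> \
    --lifecycle-configuration '{
      "Rules": [
        {
          "ID": "cleanup-incomplete-multipart-registry-uploads",
          "Status": "Enabled",
          "Filter": { "Prefix": "" },
          "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
        }
      ]
    }'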
Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 4.3.15.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.3.16. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 4.3.17. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. 
To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[?Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained.
4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 4.3.18. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.3.19. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. 
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.3.20. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 4.3.21. Next steps Validating an installation. Customize your cluster. If necessary, you can opt out of remote health reporting. If necessary, you can remove cloud provider credentials. 4.4. Installing a cluster on AWS in a disconnected environment with user-provisioned infrastructure In OpenShift Container Platform version 4.18, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 4.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You prepared the user-provisioned infrastructure. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 4.4.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 4.4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.4.3. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 4.4.3.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. 
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 4.4.3.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- Add the image content resources: imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 4.4.3.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.4.3.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. 
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually creating long-term credentials 4.4.4. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 4.4.5. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 4.4.5.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 4.24. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 4.4.6. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. 
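The parameter file in the following procedure reuses the VpcId , PublicSubnetIds , and PrivateSubnetIds outputs from the VPC stack. If you did not record those values, you can query them again with the AWS CLI. The following commands are a minimal sketch that assumes the VPC stack was created with the example name cluster-vpc ; substitute your own stack name:
USD aws cloudformation describe-stacks --stack-name cluster-vpc --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text
USD aws cloudformation describe-stacks --stack-name cluster-vpc --query "Stacks[0].Outputs[?OutputKey=='PublicSubnetIds'].OutputValue" --output text
USD aws cloudformation describe-stacks --stack-name cluster-vpc --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnetIds'].OutputValue" --output text
Each command prints one output value that you can copy into the corresponding ParameterValue field of the parameters JSON file.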
Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which as a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 4.4.6.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 4.25. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
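# This listener and its target group handle port 22623 on the internal load balancer only.
# Port 22623 is the machine config server port, which serves Ignition configs to nodes as they join the cluster; it is not exposed on the external load balancer.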
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.11" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.11" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 4.4.7. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
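Before running the procedure below, it can be useful to confirm that your local AWS profile points at the intended account and region, and that the earlier stacks finished successfully. The following optional check is a sketch; the output varies by account:
USD aws sts get-caller-identity
USD aws configure get region
USD aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --query "StackSummaries[].StackName" --output text
The last command lists the stacks that reached CREATE_COMPLETE , which should include the VPC stack and the network and load balancing stack that you created earlier.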
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 4.4.7.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 4.26. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
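# The master role above carries the broader EC2, Elastic Load Balancing, and KMS permissions that the control plane uses to manage cluster cloud resources.
# The worker role below is deliberately narrower: compute nodes only need to describe EC2 instances and regions.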
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile 4.4.8. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 4.4.9. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 4.5. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-01bf6b6fca71a7dc3 ap-east-1 ami-0594c08334dcc4afb ap-northeast-1 ami-0313928874609075d ap-northeast-2 ami-09cfc5a33f840ce70 ap-northeast-3 ami-02fece2c48e16e9f2 ap-south-1 ami-063d0eaf658eb4dc5 ap-south-2 ami-0c4930cae17448786 ap-southeast-1 ami-068f696694b2fc0f1 ap-southeast-2 ami-04aee88a86e139991 ap-southeast-3 ami-0363d9df44ce25cd3 ap-southeast-4 ami-05b72aa8744449f86 ca-central-1 ami-0a7c95e80fb37ade8 ca-west-1 ami-0818def2f3d7a696d eu-central-1 ami-02c8714aef084ee90 eu-central-2 ami-083d349477a4e9f69 eu-north-1 ami-03f4002a3746bc66b eu-south-1 ami-038d816008adca0be eu-south-2 ami-099f491d6ab9706d0 eu-west-1 ami-0f0ebf16ff38e816f eu-west-2 ami-0abb7730ffd4d9944 eu-west-3 ami-032c22188cbfff12c il-central-1 ami-08171fe42c6af2676 me-central-1 ami-0f1c6a3d726f5b7b5 me-south-1 ami-019faf03d74520d13 sa-east-1 ami-01591af00107320c3 us-east-1 ami-08f1807771f4e468b us-east-2 ami-078e26f293629fe91 us-gov-east-1 ami-068e56023ec09c2b1 us-gov-west-1 ami-09ba2da65d9d836cf us-west-1 ami-01d1d2ed3d63466da us-west-2 ami-0d769ba340e913a8c Table 4.6. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-02d76a4f0c0ee24cd ap-east-1 ami-07e78c2c0f5f81a49 ap-northeast-1 ami-0e3a6e27f6940ab63 ap-northeast-2 ami-0116db61662393b23 ap-northeast-3 ami-07dd3d8930d1c27eb ap-south-1 ami-07121d273482babf9 ap-south-2 ami-084f561e41c26ab95 ap-southeast-1 ami-02301ea2b50fc247f ap-southeast-2 ami-0690a605a9bb33d00 ap-southeast-3 ami-08d243c0580c87c80 ap-southeast-4 ami-013dad9ce63ec3dc0 ca-central-1 ami-0238dbad4895283b7 ca-west-1 ami-0faded0cfdf14248a eu-central-1 ami-085a88c5d03df3675 eu-central-2 ami-08d6da8ffaa81e2b4 eu-north-1 ami-0077ccc8e7962b7ee eu-south-1 ami-02c649a544c9395f2 eu-south-2 ami-0a955bda5a7189ebd eu-west-1 ami-040969e306c9a3efa eu-west-2 ami-06b30fcc40988cc96 eu-west-3 ami-00dc4e0a7798ae0c5 il-central-1 ami-0c1adf273a43b58e2 me-central-1 ami-00817b16f81e58b86 me-south-1 ami-0f72a9bb1975ba0f9 sa-east-1 ami-083cf54e8ffc2d716 us-east-1 ami-0eebf083d985a0bcf us-east-2 ami-0b04071739ccf4af2 us-gov-east-1 ami-092fec5203140ddd8 us-gov-west-1 ami-078ee5edd87052e70 us-west-1 ami-0344d1d886514e258 us-west-2 ami-07ef3531e7692a7ae 4.4.10. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 
6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. 
Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 4.4.10.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 4.27. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 4.4.11. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": 
"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 4.4.11.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 4.28. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: 
Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] 4.4.12. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. 
You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to launch the worker nodes into. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
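Optional: Before you launch the stack, you can sanity-check the saved template and parameter file. The following commands are only a sketch; the file names worker.yaml and worker-params.json are assumptions, so substitute the names that you used:
aws cloudformation validate-template --template-body file://worker.yaml
jq -e 'map(select(.ParameterValue == "")) | length == 0' worker-params.json || echo "one or more ParameterValue fields are still empty"
The validate-template call reports template syntax errors before any resources are created, and the jq check flags parameter values that you have not filled in yet.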
Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 4.4.12.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 4.29. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp 4.4.13. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. 
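If you drive the installation from a script, you can wrap the wait command so that the script stops when bootstrapping fails. The following is a minimal sketch; <installation_directory> is a placeholder for the directory that you used in the previous step:
# Stop a scripted installation if the control plane does not initialize.
if ./openshift-install wait-for bootstrap-complete --dir <installation_directory> --log-level=info; then
  echo "Control plane initialized; continue with the initial Operator configuration."
else
  echo "Bootstrap did not complete; gather the bootstrap node logs before retrying." >&2
  exit 1
fi
Exiting on failure prevents later steps, such as deleting the bootstrap resources, from running against a cluster whose control plane never came up.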
Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. 4.4.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 4.4.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 4.4.15.1. 
Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.4.15.2. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.4.15.2.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 4.4.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.4.16. 
Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 4.4.17. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . 
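If you prefer to capture these values once instead of copying them by hand, the following optional sketch stores them in shell variables for use in the record-creation commands in the next step. It is a convenience only, not part of the documented procedure: the variable names are illustrative, <dns_stack_name> is the name of your DNS and load balancing stack, and the commands assume the same oc, jq, and AWS CLI usage as the previous steps. Replace <domain_name> with your Route 53 base domain.
# Optional convenience only: capture the Ingress load balancer hostname and the hosted zone IDs
USD EXTERNAL_IP=USD(oc -n openshift-ingress get service router-default -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
USD LB_HOSTED_ZONE_ID=USD(aws elb describe-load-balancers | jq -r --arg dns "USD{EXTERNAL_IP}" '.LoadBalancerDescriptions[] | select(.DNSName == USDdns).CanonicalHostedZoneNameID')
USD PUBLIC_HOSTED_ZONE_ID=USD(aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' --output text | sed 's|^/hostedzone/||')
USD PRIVATE_HOSTED_ZONE_ID=USD(aws cloudformation describe-stacks --stack-name <dns_stack_name> --query 'Stacks[0].Outputs[?OutputKey==`PrivateHostedZoneId`].OutputValue' --output text)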
Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 4.4.18. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Register your cluster on the Cluster registration page. 4.4.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.4.20. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. Additional resources See About remote health monitoring for more information about the Telemetry service 4.4.21. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 4.4.22. steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager in disconnected environments . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . 
If necessary, see Registering your disconnected cluster If necessary, you can remove cloud provider credentials . 4.5. Installing a cluster with the support for configuring multi-architecture compute machines An OpenShift Container Platform cluster with multi-architecture compute machines supports compute machines with different architectures. Note When you have nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You must ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Scheduling workloads on clusters with multi-architecture compute machines . You can install an AWS cluster with the support for configuring multi-architecture compute machines. After installing the AWS cluster, you can add multi-architecture compute machines to the cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". 4.5.1. Installing a cluster with multi-architecture support You can install a cluster with the support for configuring multi-architecture compute machines. Prerequisites You installed the OpenShift CLI ( oc ). You have the OpenShift Container Platform installation program. You downloaded the pull secret for your cluster. Procedure Check that the openshift-install binary is using the multi payload by running the following command: USD ./openshift-install version Example output ./openshift-install 4.18.0 built from commit abc123etc release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc release architecture multi default architecture amd64 The output must contain release architecture multi to indicate that the openshift-install binary is using the multi payload. Update the install-config.yaml file to configure the architecture for the nodes. Sample install-config.yaml file with multi-architecture configuration apiVersion: v1 baseDomain: example.openshift.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: arm64 2 name: master platform: {} replicas: 3 # ... 1 Specify the architecture of the worker node. You can set this field to either arm64 or amd64 . 2 Specify the control plane node architecture. You can set this field to either arm64 or amd64 . steps Deploying the cluster Additional resources Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator | [
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.11\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.11\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.11\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.11\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"./openshift-install version",
"./openshift-install 4.18.0 built from commit abc123etc release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc release architecture multi default architecture amd64",
"apiVersion: v1 baseDomain: example.openshift.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: arm64 2 name: master platform: {} replicas: 3"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_aws/user-provisioned-infrastructure |