title | content | commands | url |
---|---|---|---|
Chapter 268. Apache Pulsar Component | Chapter 268. Apache Pulsar Component Available as of Camel version 2.24 Maven users will need to add the following dependency to their pom.xml for this component. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-pulsar</artifactId> <!-- use the same version as your Camel core version --> <version>x.y.z</version> </dependency> 268.1. URI format pulsar:[persistent|non-persistent]://tenant/namespace/topic 268.2. Options The Apache Pulsar component supports 3 options, which are listed below. Name Description Default Type autoConfiguration (common) The pulsar autoconfiguration AutoConfiguration pulsarClient (common) The pulsar client PulsarClient resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Apache Pulsar endpoint is configured using URI syntax: pulsar:uri with the following path and query parameters: 268.2.1. Path Parameters (1 parameter): Name Description Default Type topicUri The Topic's full URI path including type, tenant and namespace String 268.2.2. Query Parameters (11 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean consumerName (consumer) Name of the consumer when subscription is EXCLUSIVE sole-consumer String consumerNamePrefix (consumer) Prefix to add to consumer names when a SHARED or FAILOVER subscription is used cons String consumerQueueSize (consumer) Size of the consumer queue - defaults to 10 10 int numberOfConsumers (consumer) Number of consumers - defaults to 1 1 int subscriptionName (consumer) Name of the subscription to use subscription String subscriptionType (consumer) Type of the subscription: EXCLUSIVE, SHARED or FAILOVER - defaults to EXCLUSIVE EXCLUSIVE SubscriptionType exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern producerName (producer) Name of the producer default-producer String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 268.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.pulsar.enabled Whether to enable auto configuration of the pulsar component. This is enabled by default. Boolean camel.component.pulsar.pulsar-client The pulsar client. The option is an org.apache.pulsar.client.api.PulsarClient type. String camel.component.pulsar.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-pulsar</artifactId> <!-- use the same version as your Camel core version --> <version>x.y.z</version> </dependency>",
"pulsar:[persistent|non-persistent]://tenant/namespace/topic",
"pulsar:uri"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/pulsar-component |
Adding and accessing Red Hat OpenShift API Management | Adding and accessing Red Hat OpenShift API Management Red Hat OpenShift API Management 1 Adding and accessing Red Hat OpenShift API Management. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/adding_and_accessing_red_hat_openshift_api_management/index |
Chapter 3. Edit the Environment File | Chapter 3. Edit the Environment File The environment file contains the back end settings that you must configure. It also contains settings relevant to the deployment of the Shared File Systems service. For more information about environment files, see Environment Files in the Advanced Overcloud Customization guide. This release includes an integrated environment file to define a native CephFS back end, and it contains default settings used to deploy the Shared File Systems service. This file is located in the following location on the undercloud node: /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml Procedure Create an environment file to contain the required environmental settings: The following code snippet shows the default values used by director when it deploys the Shared File Systems service: 1 The parameter_defaults header signifies the start of your configuration. Specifically, it allows you to override default values set in resource_registry . This includes values set by OS::Tripleo::Services::ManilaBackendCephFs , which sets defaults for a CephFS back end. 2 With ManilaCephFSNativeDriverHandlesShareServers set to false , the driver will not handle the lifecycle of the share server. 3 ManilaCephFSNativeCephFSConfPath: sets the path to the configuration file of the Ceph cluster. 4 ManilaCephFSNativeCephFSAuthId: is the Ceph auth ID that the director will create for share access. | [
"~/templates/manila-cephfsnative-config.yaml",
"./home/stack/templates/manila-cephfsnative-config.yaml",
"parameter_defaults: # 1 ManilaCephFSNativeBackendName: cephfsnative ManilaCephFSNativeDriverHandlesShareServers: false # 2 ManilaCephFSNativeCephFSConfPath: '/etc/ceph/ceph.conf' # 3 ManilaCephFSNativeCephFSAuthId: 'manila' # 4 ManilaCephFSNativeCephFSClusterName: 'ceph' ManilaCephFSNativeCephFSEnableSnapshots: true"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/cephfs_back_end_guide_for_the_shared_file_system_service/edit-env-file |
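The chapter above only covers creating and editing the environment file. As a rough sketch of the step that usually follows, the command below passes the customized copy to director during overcloud deployment; the exact list of -e files and other options must match your existing deployment command, so treat the invocation, including the file paths, as an assumption rather than a documented step.

```bash
# Illustrative only: include both the integrated CephFS environment file and the
# customized copy when deploying (or redeploying) the overcloud with director.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml \
  -e /home/stack/templates/manila-cephfsnative-config.yaml
```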
Chapter 6. Replacing a primary host using new bricks | Chapter 6. Replacing a primary host using new bricks 6.1. Host replacement prerequisites Determine which node to use as the Ansible controller node (the node from which all Ansible playbooks are executed). Red Hat recommends using a healthy node in the same cluster as the failed node as the Ansible controller node. Power off all virtual machines in the cluster. Stop brick processes and unmount file systems on the failed host, to avoid file system inconsistency issues. Check which operating system is running on your hyperconverged hosts by running the following command: Install the same operating system on a replacement host. 6.2. Preparing the cluster for host replacement Verify host state in the Administrator Portal. Log in to the Red Hat Virtualization Administrator Portal. The host is listed as NonResponsive in the Administrator Portal. Virtual machines that previously ran on this host are in the Unknown state. Click Compute Hosts and click the Action menu (...). Click Confirm host has been rebooted and confirm the operation. Verify that the virtual machines are now listed with a state of Down . Update the SSH fingerprint for the failed node. Log in to the Ansible controller node as the root user. Remove the existing SSH fingerprint for the failed node. Copy the public key from the Ansible controller node to the freshly installed node. Verify that you can log in to all hosts in the cluster, including the Ansible controller node, using key-based SSH authentication without a password. Test access using all network addresses. The following example assumes that the Ansible controller node is host1 . Use ssh-copy-id to copy the public key to any host you cannot log into without a password using this method. 6.3. Creating the node_prep_inventory.yml file Define the replacement node in the node_prep_inventory.yml file. Procedure Familiarize yourself with your Gluster configuration. The configuration that you define in your inventory file must match the existing Gluster volume configuration. Use gluster volume info to check where your bricks should be mounted for each Gluster volume, for example: Back up the node_prep_inventory.yml file. Edit the node_prep_inventory.yml file to define your node preparation. See Appendix B, Understanding the node_prep_inventory.yml file for more information about this inventory file and its parameters. 6.4. Creating the node_replace_inventory.yml file Define your cluster hosts by creating a node_replacement_inventory.yml file. Procedure Back up the node_replace_inventory.yml file. Edit the node_replace_inventory.yml file to define your cluster. See Appendix C, Understanding the node_replace_inventory.yml file for more information about this inventory file and its parameters. 6.5. Executing the replace_node.yml playbook file The replace_node.yml playbook reconfigures a Red Hat Hyperconverged Infrastructure for Virtualization cluster to use a new node after an existing cluster node has failed. Procedure Execute the playbook. 6.6. Updating the cluster for a new primary host When you replace a failed host using a different FQDN, you need to update configuration in the cluster to use the replacement host. Procedure Change into the hc-ansible-deployment directory. Make a copy of the reconfigure_storage_inventory.yml file. Edit the reconfigure_storage_inventory.yml file to identify the following: hosts Two active hosts in the cluster that have been configured to host the Hosted Engine virtual machine. 
gluster_maintenance_old_node The backend network FQDN of the failed node. gluster_maintenance_new_node The backend network FQDN of the replacement node. ovirt_engine_hostname The FQDN of the Hosted Engine virtual machine. For example: Execute the reconfigure_he_storage.yml playbook with your updated inventory file. 6.7. Removing a failed host from the cluster When a replacement host is ready, remove the existing failed host from the cluster. Procedure Remove the failed host. Log in into the Administrator Portal. Click Compute Hosts . The failed host is in the NonResponsive state. Virtual machines running on the failed host are in the Unknown state. Select the failed host. Click the main Action menu (...) for the Hosts page and select Confirm host has been rebooted . Click OK to confirm the operation. Virtual machines move to the Down state. Select the failed host and click Management Maintenance . Click the Action menu (...) beside the failed host and click Remove . Update the storage domains. For each storage domain: Click Storage Domains . Click the storage domain name, then click Data Center Maintenance and confirm the operation. Click Manage Domain . Edit the Path field to match the new FQDN. Click OK . Note A dialog box with an Operation Cancelled error appears as a result of Bug 1853995 , but the path is updated as expected. Click the Action menu (...) beside the storage domain and click Activate . Add the replacement host to the cluster. Attach the gluster logical network to the replacement host. Restart all virtual machines. For highly available virtual machines, disable and re-enable high-availability. Click Compute Virtual Machines and select a virtual machine. Click Edit High Availability uncheck the High Availability check box and click OK . Click Edit High Availability check the High Availability check box and click OK . Start all the virtual machines. Click Compute Virtual Machines and select a virtual machine. Click the Action menu (...) Start . 6.8. Verifying healing in progress After replacing a failed host with a new host, verify that your storage is healing as expected. Procedure Verify that healing is in progress. Run the following command on any hyperconverged host: The output shows a summary of healing activity on each brick in each volume, for example: Depending on brick size, volumes can take a long time to heal. You can still run and migrate virtual machines using this node while the underlying storage heals. | [
"pkill glusterfsd umount /gluster_bricks/{engine,vmstore,data}",
"nodectl info",
"sed -i `/ failed-host-frontend.example.com /d` /root/.ssh/known_hosts sed -i `/ failed-host-backend.example.com /d` /root/.ssh/known_hosts",
"ssh-copy-id root@ new-host-backend.example.com ssh-copy-id root@ new-host-frontend.example.com",
"ssh root@ host1-backend.example.com ssh root@ host1-frontend.example.com ssh root@ host2-backend.example.com ssh root@ host2-frontend.example.com ssh root@ new-host-backend.example.com ssh root@ new-host-frontend.example.com",
"ssh-copy-id root@ host-frontend.example.com ssh-copy-id root@ host-backend.example.com",
"gluster volume info engine | grep -i brick Number of Bricks: 1 x 3 = 3 Bricks: Brick1: host1.example.com:/gluster_bricks/engine/engine Brick2: host2.example.com:/gluster_bricks/engine/engine Brick3: host3.example.com:/gluster_bricks/engine/engine",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment cp node_prep_inventory.yml node_prep_inventory.yml.bk",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment cp node_replace_inventory.yml node_replace_inventory.yml.bk",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/ ansible-playbook -i node_prep_inventory.yml -i node_replace_inventory.yml tasks/replace_node.yml",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/",
"cp reconfigure_storage_inventory.yml reconfigure_storage_inventory.yml.bk",
"all: hosts: host2-backend.example.com : host3-backend.example.com : vars: gluster_maintenance_old_node: host1-backend.example.com gluster_maintenance_new_node: host4-backend.example.com ovirt_engine_hostname: engine.example.com",
"ansible-playbook -i reconfigure_he_storage_inventory.yml tasks/reconfigure_he_storage.yml",
"for vol in `gluster volume list`; do gluster volume heal USDvol info summary; done",
"Brick brick1 Status: Connected Total Number of entries: 3 Number of entries in heal pending: 2 Number of entries in split-brain: 1 Number of entries possibly healing: 0"
]
| https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/replacing-hosts_diff-fqdn-primary |
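The SSH fingerprint cleanup in the chapter uses sed against /root/.ssh/known_hosts. An equivalent approach, shown here as an aside rather than a replacement for the documented steps, is ssh-keygen -R, which removes every stored key for a given hostname; the hostnames below are the same example placeholders used in the chapter.

```bash
# Remove stale host keys for the failed node by hostname instead of editing known_hosts with sed.
ssh-keygen -R failed-host-frontend.example.com
ssh-keygen -R failed-host-backend.example.com
```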
Chapter 7. Booting the installation media | Chapter 7. Booting the installation media You can boot the Red Hat Enterprise Linux installation using a USB or DVD. You can register RHEL using the Red Hat Content Delivery Network (CDN). CDN is a geographically distributed series of web servers. These servers provide, for example, packages and updates to RHEL hosts with a valid subscription. During the installation, registering and installing RHEL from the CDN offers following benefits: Utilizing the latest packages for an up-to-date system immediately after installation and Integrated support for connecting to Red Hat Insights and enabling System Purpose. Prerequisite You have created a bootable installation media (USB or DVD). Procedure Power off the system to which you are installing Red Hat Enterprise Linux. Disconnect any drives from the system. Power on the system. Insert the bootable installation media (USB, DVD, or CD). Power off the system but do not remove the boot media. Power on the system. You might need to press a specific key or combination of keys to boot from the media or configure the Basic Input/Output System (BIOS) of your system to boot from the media. For more information, see the documentation that came with your system. The Red Hat Enterprise Linux boot window opens and displays information about a variety of available boot options. Use the arrow keys on your keyboard to select the boot option that you require, and press Enter to select the boot option. The Welcome to Red Hat Enterprise Linux window opens and you can install Red Hat Enterprise Linux using the graphical user interface. The installation program automatically begins if no action is performed in the boot window within 60 seconds. Optional: Edit the available boot options: UEFI-based systems: Press E to enter edit mode. Change the predefined command line to add or remove boot options. Press Enter to confirm your choice. BIOS-based systems: Press the Tab key on your keyboard to enter edit mode. Change the predefined command line to add or remove boot options. Press Enter to confirm your choice. Additional Resources Customizing the system in the installer Boot options reference | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/booting-the-installer-from-local-media_rhel-installer |
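When the boot chapter says you can change the predefined command line, the edit normally amounts to appending or removing installer options on the kernel command line in the boot menu editor. The options below are common, optional examples and are assumptions about what you might add, not required settings; the Kickstart URL in particular is a hypothetical placeholder.

```
# Appended to the kernel command line from the boot menu editor (optional examples):
inst.text
inst.ks=http://server.example.com/ks.cfg
```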
Chapter 33. Compiler and Tools | Chapter 33. Compiler and Tools Multiple bugs when booting from SAN over FCoE Multiple bugs have arisen from the current implementation of boot from Storage Area Network (SAN) using Fibre Channel over Ethernet (FCoE). Red Hat is targeting a future release of Red Hat Enterprise Linux 7 for the fixes for these bugs. For a list of the affected bugs and workarounds (where available), please contact your Red Hat support representative. Valgrind cannot run programs built against an earlier version of Open MPI Red Hat Enterprise Linux 7.2 supports only the Open MPI application binary interface (ABI) in version 1.10, which is incompatible with the previously shipped 1.6 version of the Open MPI ABI. As a consequence, programs that are built against the earlier version of Open MPI cannot be run under Valgrind included in Red Hat Enterprise Linux 7.2. To work around this problem, use the Red Hat Developer Toolset version of Valgrind for programs linked against Open MPI version 1.6. Synthetic functions generated by GCC confuse SystemTap A GCC optimization can generate synthetic functions for partially inlined copies of other functions. These synthetic functions look like first-class functions and confuse tools such as SystemTap and GDB because SystemTap probes can be placed on both synthetic and real function entry points. This can result in multiple SystemTap probe hits per a single underlying function call. To work around this problem, a SystemTap script may need to adopt countermeasures, such as detecting recursion and suppressing probes related to inlined partial functions. For example, the following script: probe kernel.function("can_nice").call { } could attempt to avoid the described problem as follows: global in_can_nice% probe kernel.function("can_nice").call { in_can_nice[tid()] ++; if (in_can_nice[tid()] > 1) { } /* real probe handler here */ } probe kernel.function("can_nice").return { in_can_nice[tid()] --; } Note that this script does not take into account all possible scenarios. It would not work as expected in case of, for example, missed kprobes or kretprobes, or genuine intended recursion. SELinux AVC generated when ABRT collects backtraces If the new, optional ABRT feature that allows collecting backtraces from crashed processes without the need to write a core-dump file to disk is enabled (using the CreateCoreBacktrace option in the /etc/abrt/plugins/CCpp.conf configuration file), an SELinux AVC message is generated when the abrt-hook-ccpp tool tries to use the sigchld access on a crashing process in order to get the list of functions on the process' stack. GDB keeps watchpoints active even after reporting them as hit In some cases, on the 64-bit ARM architecture, GDB can incorrectly keep watchpoints active even after reporting them as hit. This results in the watchpoints getting hit for the second time, only this time the hardware indication is no longer recognized as a watchpoint and is printed as a generic SIGTRAP signal instead. There are several ways to work around this problem and stop the excessive SIGTRAP reporting. * Type continue when seeing a SIGTRAP after a watchpoint has been hit. * Instruct GDB to ignore the SIGTRAP signal by adding the following line to your ~/.gdbinit configuration file: handle SIGTRAP nostop noprint * Use software watchpoints instead of their hardware equivalents. Note that the debugging is significantly slower with software watchpoints, and only the watch command is available (not rwatch or awatch ). 
Add the following line to your ~/.gdbinit configuration file: set can-use-hw-watchpoints 0 Booting fails using grubaa64.efi Due to issues in pxeboot or the PXE configuration file, installing Red Hat Enterprise Linux 7.2 using the 7.2 grubaa64.efi boot loader either fails or experiences significant delay in booting the operating system. As a workaround, use the 7.1 grubaa64.efi file instead of the 7.2 grubaa64.efi file when installing Red Hat Enterprise Linux 7.2. MPX feature in GCC requires Red Hat Developer Toolset version of the libmpx library The libmpxwrappers library is missing in the gcc-libraries version of the libmpx library. As a consequence, the Memory Protection Extensions (MPX) feature might not work correctly in GCC, and the application might not link properly. To work around this problem, use the Red Hat Developer Toolset 4.0 version of the libmpx library. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/known-issues-compiler_and_tools |
Chapter 5. Downloading the test plan from Red Hat Certification Portal | Chapter 5. Downloading the test plan from Red Hat Certification Portal Procedure Log in to Red Hat Certification portal . Search for the case number related to your product certification, and copy it. Click Cases enter the product case number. Optional: To list the components that are tested during the test run, click Test Plans . Click Download Test Plan . steps If you plan to use Cockpit to run the tests, see Configuring the systems and running tests by using Cockpit . If you plan to use CLI to run the tests, see Configuring the systems and running tests by using CLI . | null | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_workflow/proc_downloading-the-test-plan-from-RHCert-Connect_cloud-instance-wf-setting-test-environment |
9.3.3. Domain vCPU Threads | 9.3.3. Domain vCPU Threads In addition to tuning domain processes, libvirt also permits the setting of the pinning policy for each vcpu thread in XML configuration. This is done inside the <cputune> tags: <cputune> <vcpupin vcpu="0" cpuset="1-4,^2"/> <vcpupin vcpu="1" cpuset="0,1"/> <vcpupin vcpu="2" cpuset="2,3"/> <vcpupin vcpu="3" cpuset="0,4"/> </cputune> In this tag, libvirt uses either cgroup or sched_setaffinity(2) to pin the vcpu thread to the specified cpuset. Note For more details on cputune, refer to the following URL: http://libvirt.org/formatdomain.html#elementsCPUTuning In addition, if you need to set up a virtual machines with more vCPU than a single NUMA node, configure the host so that the guest detects a NUMA topology on the host. This allows for 1:1 mapping of CPUs, memory, and NUMA nodes. For example, this can be applied with a guest with 4 vCPUs and 6 GB memory, and a host with the following NUMA settings: In this scenario, use the following Domain XML setting: <cputune> <vcpupin vcpu="0" cpuset="1"/> <vcpupin vcpu="1" cpuset="5"/> <vcpupin vcpu="2" cpuset="2"/> <vcpupin vcpu="3" cpuset="6"/> </cputune> <numatune> <memory mode="strict" nodeset="1-2"/> </numatune> <cpu> <numa> <cell id="0" cpus="0-1" memory="3" unit="GiB"/> <cell id="1" cpus="2-3" memory="3" unit="GiB"/> </numa> </cpu> | [
"<cputune> <vcpupin vcpu=\"0\" cpuset=\"1-4,^2\"/> <vcpupin vcpu=\"1\" cpuset=\"0,1\"/> <vcpupin vcpu=\"2\" cpuset=\"2,3\"/> <vcpupin vcpu=\"3\" cpuset=\"0,4\"/> </cputune>",
"4 available nodes (0-3) Node 0: CPUs 0 4, size 4000 MiB Node 1: CPUs 1 5, size 3999 MiB Node 2: CPUs 2 6, size 4001 MiB Node 3: CPUs 0 4, size 4005 MiB",
"<cputune> <vcpupin vcpu=\"0\" cpuset=\"1\"/> <vcpupin vcpu=\"1\" cpuset=\"5\"/> <vcpupin vcpu=\"2\" cpuset=\"2\"/> <vcpupin vcpu=\"3\" cpuset=\"6\"/> </cputune> <numatune> <memory mode=\"strict\" nodeset=\"1-2\"/> </numatune> <cpu> <numa> <cell id=\"0\" cpus=\"0-1\" memory=\"3\" unit=\"GiB\"/> <cell id=\"1\" cpus=\"2-3\" memory=\"3\" unit=\"GiB\"/> </numa> </cpu>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-numa_and_libvirt-domain_vcpu_threads |
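The <cputune> examples above are static domain XML. If you want to inspect the host topology that those cpuset values refer to, or apply and verify pinning on a running guest, the commands below show one way to do it; the guest name guest1 is a hypothetical placeholder, and whether runtime pinning is appropriate depends on your configuration.

```bash
# Inspect the host NUMA topology that the cpuset values refer to.
numactl --hardware

# Pin vCPU 0 of a running guest (hypothetical name "guest1") to physical CPU 1,
# then display the current vCPU-to-pCPU affinity for all vCPUs of that guest.
virsh vcpupin guest1 0 1
virsh vcpupin guest1
```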
Chapter 14. Uninstalling a cluster on GCP | Chapter 14. Uninstalling a cluster on GCP You can remove a cluster that you deployed to Google Cloud Platform (GCP). 14.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with user-provisioned infrastructure clusters. There might be resources that the installation program did not create or that the installation program is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 14.2. Deleting GCP resources with the Cloud Credential Operator utility To clean up resources after uninstalling an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in manual mode with GCP Workload Identity, you can use the CCO utility ( ccoctl ) to remove the GCP resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Install an OpenShift Container Platform cluster with the CCO in manual mode with GCP Workload Identity. Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract --credentials-requests \ --cloud=gcp \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 USDRELEASE_IMAGE 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Delete the GCP resources that ccoctl created: USD ccoctl gcp delete \ --name=<name> \ 1 --project=<gcp_project_id> \ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <gcp_project_id> is the GCP project ID in which to delete cloud resources. Verification To verify that the resources are deleted, query GCP. For more information, refer to GCP documentation. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 USDRELEASE_IMAGE",
"ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_gcp/uninstalling-cluster-gcp |
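The note at the start of the uninstall chapter recommends checking GCP for resources that were not removed. Assuming the installation directory still contains metadata.json, a rough way to look for leftovers is to search for resources whose names carry the cluster's infrastructure ID prefix; the jq usage and the specific resource types queried below are illustrative assumptions, not part of the documented procedure.

```bash
# Read the infrastructure ID that the installer recorded for this cluster.
INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)

# List a few resource types whose names still start with that prefix (extend as needed).
gcloud compute instances list --filter="name~^${INFRA_ID}"
gcloud compute forwarding-rules list --filter="name~^${INFRA_ID}"
```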
Chapter 6. Configuring a GFS2 File System in a Pacemaker Cluster | Chapter 6. Configuring a GFS2 File System in a Pacemaker Cluster The following procedure is an outline of the steps required to set up a Pacemaker cluster that includes a GFS2 file system. On each node in the cluster, install the High Availability and Resilient Storage packages. Create the Pacemaker cluster and configure fencing for the cluster. For information on configuring a Pacemaker cluster, see Configuring the Red Hat High Availability Add-On with Pacemaker . On each node in the cluster, enable the clvmd service. If you will be using cluster-mirrored volumes, enable the cmirrord service. After you enable these daemons, when starting and stopping Pacemaker or the cluster through normal means using pcs cluster start , pcs cluster stop , service pacemaker start , or service pacemaker stop , the clvmd and cmirrord daemons will be started and stopped as needed. On one node in the cluster, perform the following steps: Set the global Pacemaker parameter no_quorum_policy to freeze . Note By default, the value of no-quorum-policy is set to stop , indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost both the applications using the GFS2 mounts and the GFS2 mount itself cannot be correctly stopped. Any attempts to stop these resources without quorum will fail which will ultimately result in the entire cluster being fenced every time quorum is lost. To address this situation, you can set the no-quorum-policy=freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained. After ensuring that the locking type is set to 3 in the /etc/lvm/lvm.conf file to support clustered locking, Create the clustered LV and format the volume with a GFS2 file system. Ensure that you create enough journals for each of the nodes in your cluster. Configure a clusterfs resource. You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options= options . Run the pcs resource describe Filesystem command for full configuration options. This cluster resource creation command specifies the noatime mount option. Verify that GFS2 is mounted as expected. (Optional) Reboot all cluster nodes to verify GFS2 persistence and recovery. | [
"yum groupinstall 'High Availability' 'Resilient Storage'",
"chkconfig clvmd on chkconfig cmirrord on",
"pcs property set no-quorum-policy=freeze",
"pvcreate /dev/vdb vgcreate -Ay -cy cluster_vg /dev/vdb lvcreate -L5G -n cluster_lv cluster_vg mkfs.gfs2 -j2 -p lock_dlm -t rhel7-demo:gfs2-demo /dev/cluster_vg/cluster_lv",
"pcs resource create clusterfs Filesystem device=\"/dev/cluster_vg/cluster_lv\" directory=\"/var/mountpoint\" fstype=\"gfs2\" \"options=noatime\" op monitor interval=10s on-fail=fence clone interleave=true",
"mount |grep /mnt/gfs2-demo /dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2-demo type gfs2 (rw,noatime,seclabel)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ch-clustsetup-GFS2 |
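After the clusterfs resource is created and cloned, a quick sanity check is to confirm that the clone is started on every node and that the file system is mounted. The pcs status check is a generic Pacemaker verification rather than a step taken from the chapter; the mount check repeats the one shown above.

```bash
# Confirm the clusterfs clone is Started on all cluster nodes.
pcs status

# Confirm the GFS2 file system is mounted with the expected options on each node.
mount | grep gfs2
```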
Chapter 45. Managing hosts using Ansible playbooks | Chapter 45. Managing hosts using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Identity Management (IdM), and you can use Ansible modules to automate host management. The following concepts and operations are performed when managing hosts and host entries using Ansible playbooks: Ensuring the presence of IdM host entries that are only defined by their FQDNs Ensuring the presence of IdM host entries with IP addresses Ensuring the presence of multiple IdM host entries with random passwords Ensuring the presence of an IdM host entry with multiple IP addresses Ensuring the absence of IdM host entries 45.1. Ensuring the presence of an IdM host entry with FQDN using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are only defined by their fully-qualified domain names (FQDNs). Specifying the FQDN name of the host is enough if at least one of the following conditions applies: The IdM server is not configured to manage DNS. The host does not have a static IP address or the IP address is not known at the time the host is configured. Adding a host defined only by an FQDN essentially creates a placeholder entry in the IdM DNS service. For example, laptops may be preconfigured as IdM clients, but they do not have IP addresses at the time they are configured. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the FQDN of the host whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/add-host.yml file: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. 45.2. 
Ensuring the presence of an IdM host entry with DNS information using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are defined by their fully-qualified domain names (FQDNs) and their IP addresses. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. In addition, if the IdM server is configured to manage DNS and you know the IP address of the host, specify a value for the ip_address parameter. The IP address is necessary for the host to exist in the DNS resource records. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-present.yml file. You can also include other, additional information: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms host01.idm.example.com exists in IdM. 45.3. Ensuring the presence of multiple IdM host entries with random passwords using Ansible playbooks The ipahost module allows the system administrator to ensure the presence or absence of multiple host entries in IdM using just one Ansible task. Follow this procedure to ensure the presence of multiple host entries that are only defined by their fully-qualified domain names (FQDNs). Running the Ansible playbook generates random passwords for the hosts. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the hosts whose presence in IdM you want to ensure. To make the Ansible playbook generate a random password for each host even when the host already exists in IdM and update_password is limited to on_create , add the random: true and force: true options. To simplify this step, you can copy and modify the example from the /usr/share/doc/ansible-freeipa/README-host.md Markdown file: Run the playbook: Note To deploy the hosts as IdM clients using random, one-time passwords (OTPs), see Authorization options for IdM client enrollment using an Ansible playbook or Installing a client by using a one-time password: Interactive installation . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of one of the hosts: The output confirms host01.idm.example.com exists in IdM with a random password. 45.4. Ensuring the presence of an IdM host entry with multiple IP addresses using Ansible playbooks Follow this procedure to ensure the presence of a host entry in Identity Management (IdM) using Ansible playbooks. The host entry is defined by its fully-qualified domain name (FQDN) and its multiple IP addresses. Note In contrast to the ipa host utility, the Ansible ipahost module can ensure the presence or absence of several IPv4 and IPv6 addresses for a host. The ipa host-mod command cannot handle IP addresses. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file. Specify, as the name of the ipahost variable, the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. Specify each of the multiple IPv4 and IPv6 ip_address values on a separate line by using the ip_address syntax. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-member-ipaddresses-present.yml file. You can also include additional information: Run the playbook: Note The procedure creates a host entry in the IdM LDAP server but does not enroll the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . 
Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. To verify that the multiple IP addresses of the host exist in the IdM DNS records, enter the ipa dnsrecord-show command and specify the following information: The name of the IdM domain The name of the host The output confirms that all the IPv4 and IPv6 addresses specified in the playbook are correctly associated with the host01.idm.example.com host entry. 45.5. Ensuring the absence of an IdM host entry using Ansible playbooks Follow this procedure to ensure the absence of host entries in Identity Management (IdM) using Ansible playbooks. Prerequisites IdM administrator credentials Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose absence from IdM you want to ensure. If your IdM domain has integrated DNS, use the updatedns: true option to remove the associated records of any kind for the host from the DNS. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/delete-host.yml file: Run the playbook: Note The procedure results in: The host not being present in the IdM Kerberos realm. The host entry not being present in the IdM LDAP server. To remove the specific IdM configuration of system services, such as System Security Services Daemon (SSSD), from the client host itself, you must run the ipa-client-install --uninstall command on the client. For details, see Uninstalling an IdM client . Verification Log into ipaserver as admin: Display information about host01.idm.example.com : The output confirms that the host does not exist in IdM. 45.6. Additional resources See the /usr/share/doc/ansible-freeipa/README-host.md Markdown file. See the additional playbooks in the /usr/share/doc/ansible-freeipa/playbooks/host directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com state: present force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host01.idm.example.com is present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com description: Example host ip_address: 192.168.0.123 locality: Lab ns_host_location: Lab ns_os_version: CentOS 7 ns_hardware_platform: Lenovo T61 mac_address: - \"08:00:27:E3:B1:2D\" - \"52:54:00:BD:97:1E\" state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Description: Example host Locality: Lab Location: Lab Platform: Lenovo T61 Operating system: CentOS 7 Principal name: host/[email protected] Principal alias: host/[email protected] MAC address: 08:00:27:E3:B1:2D, 52:54:00:BD:97:1E Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure hosts with random password hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Hosts host01.idm.example.com and host02.idm.example.com present with random passwords ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" hosts: - name: host01.idm.example.com random: true force: true - name: host02.idm.example.com random: true force: true register: ipahost",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-are-present.yml [...] TASK [Hosts host01.idm.example.com and host02.idm.example.com present with random passwords] changed: [r8server.idm.example.com] => {\"changed\": true, \"host\": {\"host01.idm.example.com\": {\"randompassword\": \"0HoIRvjUdH0Ycbf6uYdWTxH\"}, \"host02.idm.example.com\": {\"randompassword\": \"5VdLgrf3wvojmACdHC3uA3s\"}}}",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Password: True Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host member IP addresses present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host101.example.com IP addresses present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com ip_address: - 192.168.0.123 - fe80::20c:29ff:fe02:a1b3 - 192.168.0.124 - fe80::20c:29ff:fe02:a1b4 force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-with-multiple-IP-addreses-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"ipa dnsrecord-show idm.example.com host01 [...] Record name: host01 A record: 192.168.0.123, 192.168.0.124 AAAA record: fe80::20c:29ff:fe02:a1b3, fe80::20c:29ff:fe02:a1b4",
"[ipaserver] server.idm.example.com",
"--- - name: Host absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com absent ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com updatedns: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa host-show host01.idm.example.com ipa: ERROR: host01.idm.example.com: host not found"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-hosts-using-Ansible-playbooks_configuring-and-managing-idm |
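Before applying any of the playbooks above against production IdM servers, you can preview what would change by adding Ansible's --check flag to the same ansible-playbook invocation used in the chapter. This dry run is an illustration rather than part of the documented procedure, and not every module reports check-mode results with full fidelity.

```bash
# Dry run: report what the playbook would change in IdM without actually changing it.
ansible-playbook --vault-password-file=password_file --check -v \
  -i path_to_inventory_directory/inventory.file \
  path_to_playbooks_directory/ensure-host-is-present.yml
```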
Monitoring APIs | Monitoring APIs OpenShift Container Platform 4.15 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/monitoring_apis/index |
Chapter 4. Analyzing and triaging your compliance reports | Chapter 4. Analyzing and triaging your compliance reports The compliance service displays data for each policy and system registered (and reporting data) to the service. This can be a lot of data, most of which might not be relevant to your immediate goals. The following sections discuss ways to refine the bulk of compliance service data- in Reports, SCAP policies, and Systems- to focus on the systems or policies that matter the most to you. The compliance service enables users to set filters on lists of systems, rules, and policies. Like other Insights for Red Hat Enterprise Linux services, the compliance service also enables filtering by system-group tags. However, because compliance-registered systems use a different reporting mechanism, the tag filters must be set directly in lists of systems in the compliance UI views, rather than from the global, Filter by status dropdown used elsewhere in the Insights application. Important To see accurate data for your systems, always run insights-client --compliance on each system prior to viewing the results in the UI. 4.1. Compliance reports From Security > Compliance > Reports , use the following primary and secondary filters to focus on a specific or narrow set of reports: Policy name. Search for a policy by name. Policy type. Select from the policy types configured for your infrastructure in the compliance service. Operating system. Select one or more RHEL OS major versions. Systems meeting compliance. Show policies for which a percentage (range) of included systems are compliant. 4.2. SCAP policies From Security > Compliance > SCAP policies , use the Filter by name search box to locate a specific policy by name. Then click on the policy name to see the policy card, which includes the following information: Details. View details such as compliance threshold, business objective, OS, and SSG version. Rules. View and filter the rules included in the specific SSG version of the policy by Name, Severity and Remediation available. Then sort the results by Rule name, Severity or Ansible Playbook support. Systems. Search by system name to locate a specific system associated with the policy then click the system name to see more information about that system and issues that may affect it. 4.3. Systems The default functionality on Security > Compliance > Systems is to search by system name. Tags. Search by system group or tag name. Name. Search by system name. Policy. Search by policy name and see the systems included in that policy. Operating system. Search by RHEL OS major versions to see only RHEL 7 or RHEL 8 systems. 4.4. Searching The search function in the compliance service works in the context of the page you are viewing. SCAP Policies. Search for a specific policy by name. Systems. Search by system name, policy, or Red Hat Enterprise Linux operating system major version. Rules list (single system). The rules list search function allows you to search by the rule name or identifier. Identifiers are shown directly below the rule name. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_policy_compliance_of_rhel_systems/compliance-understanding-reporting_compliance-managing-policies |
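The Important note in the compliance chapter says to run insights-client --compliance on each system before viewing results in the UI. For reference, that upload step is the single command below, run as root on each registered system; the command itself is taken directly from the note.

```bash
# Run the OpenSCAP scan for the assigned policies and upload the results to the compliance service.
insights-client --compliance
```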
SystemTap Tapset Reference | SystemTap Tapset Reference Red Hat Enterprise Linux 6 For SystemTap in Red Hat Enterprise Linux 6 Red Hat, Inc. Robert Kratky Red Hat Customer Content Services [email protected] William Cohen Red Hat Performance Tools Don Domingo Red Hat Customer Content Services Edited by Jacquelynn East Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/index |
Chapter 36. Setting the priority for a process with the chrt utility | Chapter 36. Setting the priority for a process with the chrt utility You can set the priority for a process using the chrt utility. Prerequisites You have administrator privileges. 36.1. Setting the process priority using the chrt utility The chrt utility checks and adjusts scheduler policies and priorities. It can start new processes with the desired properties, or change the properties of a running process. Procedure To set the scheduling policy of a process, run the chrt command with the appropriate command options and parameters. In the following example, the process ID affected by the command is 1000 , and the priority ( -p ) is 50 . To start an application with a specified scheduling policy and priority, add the name of the application, and the path to it, if necessary, along with the attributes. For more information about the chrt utility options, see The chrt utility options . 36.2. The chrt utility options The chrt utility options include command options and parameters specifying the process and priority for the command. Policy options -f Sets the scheduler policy to SCHED_FIFO . -o Sets the scheduler policy to SCHED_OTHER . -r Sets the scheduler policy to SCHED_RR (round robin). -d Sets the scheduler policy to SCHED_DEADLINE . -p n Sets the priority of the process to n . When setting a process to SCHED_DEADLINE, you must specify the runtime , deadline , and period parameters. For example: where --sched-runtime 5000000 is the run time in nanoseconds. --sched-deadline 10000000 is the relative deadline in nanoseconds. --sched-period 16666666 is the period in nanoseconds. 0 is a placeholder for unused priority required by the chrt command. 36.3. Additional resources chrt(1) man page on your system | [
"chrt -f -p 50 1000",
"chrt -r -p 50 /bin/my-app",
"chrt -d --sched-runtime 5000000 --sched-deadline 10000000 --sched-period 16666666 0 video_processing_tool"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_setting-the-priority-for-a-process-with-the-chrt-utility_optimizing-rhel9-for-real-time-for-low-latency-operation |
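The chrt chapter shows how to set policies and priorities; chrt can also report the current policy and priority of an existing process, which is a convenient way to verify the change. The PID below reuses the example value from the chapter.

```bash
# Display the scheduling policy and priority currently applied to PID 1000.
chrt -p 1000
```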
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_storage_cluster/making-open-source-more-inclusive |
Chapter 2. Using Ansible roles to automate repetitive tasks on clients | Chapter 2. Using Ansible roles to automate repetitive tasks on clients 2.1. Assigning Ansible roles to an existing host You can use Ansible roles for remote management of Satellite clients. Prerequisites Ensure that you have configured and imported Ansible roles. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host and click Edit . On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list. Click the + icon to add the role to the host. You can add more than one role. Click Submit . After you assign Ansible roles to hosts, you can use Ansible for remote execution. For more information, see Section 4.13, "Distributing SSH keys for remote execution" . Overriding parameter variables On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. This includes all Ansible Playbook parameters and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter . 2.2. Removing Ansible roles from a host Use the following procedure to remove Ansible roles from a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host and click Edit . Select the Ansible Roles tab. In the Assigned Ansible Roles area, click the - icon to remove the role from the host. Repeat to remove more roles. Click Submit . 2.3. Changing the order of Ansible roles Use the following procedure to change the order of Ansible roles applied to a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. Select the Ansible Roles tab. In the Assigned Ansible Roles area, you can change the order of the roles by dragging and dropping the roles into the preferred position. Click Submit to save the order of the Ansible roles. 2.4. Running Ansible roles on a host You can run Ansible roles on a host through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host that contains the Ansible role you want to run. From the Select Action list, select Run all Ansible roles . You can view the status of your Ansible job on the Run Ansible roles page. To rerun a job, click Rerun . 2.5. Assigning Ansible roles to a host group You can use Ansible roles for remote management of Satellite clients. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . Procedure In the Satellite web UI, navigate to Configure > Host Groups . Click the host group name to which you want to assign an Ansible role. On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list. Click the + icon to add the role to the host group. You can add more than one role. Click Submit . 2.6. Running Ansible roles on a host group You can run Ansible roles on a host group through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host group. 
You must have at least one host in your host group. Procedure In the Satellite web UI, navigate to Configure > Host Groups . From the list in the Actions column for the host group, select Run all Ansible roles . You can view the status of your Ansible job on the Run Ansible roles page. Click Rerun to rerun a job. 2.7. Running Ansible roles in check mode You can run Ansible roles in check mode through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host group. You must have at least one host in your host group. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit for the host you want to enable check mode for. In the Parameters tab, ensure that the host has a parameter named ansible_roles_check_mode with type boolean set to true . Click Submit . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_ansible_integration/Using_Ansible_Roles_to_Automate_Repetitive_Tasks_on_Clients_ansible |
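If you manage many hosts, the same check mode parameter from Section 2.7 can usually be set from the Hammer CLI instead of the web UI. The following is an unverified sketch: the host name client.example.com is a placeholder, and option names such as --parameter-type can vary between Satellite releases, so confirm them with hammer host set-parameter --help before use.
[
"hammer host set-parameter --host client.example.com --name ansible_roles_check_mode --parameter-type boolean --value true"
]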
Chapter 1. Preparing to install on GCP | Chapter 1. Preparing to install on GCP 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on GCP Before installing OpenShift Container Platform on Google Cloud Platform (GCP), you must create a service account and configure a GCP project. See Configuring a GCP project for details about creating a project, enabling API services, configuring DNS, GCP account limits, and supported GCP regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating long-term credentials for GCP for other options. 1.3. Choosing a method to install OpenShift Container Platform on GCP You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on GCP : You can install OpenShift Container Platform on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on GCP : You can install a customized cluster on GCP infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on GCP with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on GCP in a restricted network : You can install OpenShift Container Platform on GCP on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the GCP APIs. Installing a cluster into an existing Virtual Private Cloud : You can install OpenShift Container Platform on an existing GCP Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits on creating new accounts or infrastructure. 
Installing a private cluster on an existing VPC : You can install a private cluster on an existing GCP VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on GCP infrastructure that you provision, by using one of the following methods: Installing a cluster on GCP with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP infrastructure that you provide. You can use the provided Deployment Manager templates to assist with the installation. Installing a cluster with shared VPC on user-provisioned infrastructure in GCP : You can use the provided Deployment Manager templates to create GCP resources in a shared VPC infrastructure. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP in a restricted network with user-provisioned infrastructure. By creating an internal mirror of the installation release content, you can install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.4. Next steps Configuring a GCP project | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/preparing-to-install-on-gcp
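Whichever installer-provisioned method you choose, the workflow is driven by the openshift-install binary. The following sketch assumes a local directory named ./gcp-cluster for the installation assets; the subcommands themselves are the standard installation program commands.
[
"openshift-install create install-config --dir ./gcp-cluster",
"openshift-install create cluster --dir ./gcp-cluster --log-level=info"
]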
2.5. Configuring a High Availability Application | 2.5. Configuring a High Availability Application After creating a cluster and configuring fencing for the nodes in the cluster, you define and configure the components of the high availability service you will run on the cluster. To complete your cluster setup, perform the following steps. Configure shared storage and file systems required by your application. For information on high availability logical volumes, see Appendix F, High Availability LVM (HA-LVM) . For information on the GFS2 clustered file system, see the Global File System 2 manual. Optionally, you can customize your cluster's behavior by configuring a failover domain. A failover domain determines which cluster nodes an application will run on in what circumstances, determined by a set of failover domain configuration options. For information on failover domain options and how they determine a cluster's behavior, see the High Availability Add-On Overview . For information on configuring failover domains, see Section 4.8, "Configuring a Failover Domain" . Configure cluster resources for your system. Cluster resources are the individual components of the applications running on a cluster node. For information on configuring cluster resources, see Section 4.9, "Configuring Global Cluster Resources" . Configure the cluster services for your cluster. A cluster service is the collection of cluster resources required by an application running on a cluster node that can fail over to another node in a high availability cluster. You can configure the startup and recovery policies for a cluster service, and you can configure resource trees for the resources that constitute the service, which determine startup and shutdown order for the resources as well as the relationships between the resources. For information on service policies, resource trees, service operations, and resource actions, see the High Availability Add-On Overview . For information on configuring cluster services, see Section 4.10, "Adding a Cluster Service to the Cluster" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-applicationconfig-ca |
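On Red Hat Enterprise Linux 6, failover domains, resources, and services can also be defined with the ccs command instead of the Conga UI. The following is only an illustrative sketch under assumed names (node01.example.com, webdomain, webservice, and the IP address are all placeholders); verify the exact ccs options against the ccs(8) man page before use.
[
"ccs -h node01.example.com --addfailoverdomain webdomain restricted",
"ccs -h node01.example.com --addfailoverdomainnode webdomain node01.example.com 1",
"ccs -h node01.example.com --addresource ip address=192.168.1.100 monitor_link=on",
"ccs -h node01.example.com --addservice webservice domain=webdomain recovery=relocate"
]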
Chapter 1. Understanding Red Hat Network Functions Virtualization (NFV) | Chapter 1. Understanding Red Hat Network Functions Virtualization (NFV) Network Functions Virtualization (NFV) is a software-based solution that helps the Communication Service Providers (CSPs) to move beyond the traditional, proprietary hardware to achieve greater efficiency and agility while reducing the operational costs. An NFV environment allows for IT and network convergence by providing a virtualized infrastructure using the standard virtualization technologies that run on standard hardware devices such as switches, routers, and storage to virtualize network functions (VNFs). The management and orchestration logic deploys and sustains these services. NFV also includes a Systems Administration, Automation and Life-Cycle Management thereby reducing the manual work necessary. 1.1. Advantages of NFV The main advantages of implementing network functions virtualization (NFV) are as follows: Accelerates the time-to-market by allowing you to quickly deploy and scale new networking services to address changing demands. Supports innovation by enabling service developers to self-manage their resources and prototype using the same platform that will be used in production. Addresses customer demands in hours or minutes instead of weeks or days, without sacrificing security or performance. Reduces capital expenditure because it uses commodity-off-the-shelf hardware instead of expensive tailor-made equipment. Uses streamlined operations and automation that optimize day-to-day tasks to improve employee productivity and reduce operational costs. 1.2. Supported Configurations for NFV Deployments You can use the Red Hat OpenStack Platform director toolkit to isolate specific network types, for example, external, project, internal API, and so on. You can deploy a network on a single network interface, or distribute it over multiple host network interfaces. With Open vSwitch you can create bonds by assigning multiple interfaces to a single bridge. Configure network isolation in a Red Hat OpenStack Platform installation with template files. If you do not provide template files, the service networks deploy on the provisioning network. There are two types of template configuration files: network-environment.yaml This file contains network details, such as subnets and IP address ranges, for the overcloud nodes. This file also contains the different settings that override the default parameter values for various scenarios. Host network templates, for example, compute.yaml and controller.yaml These templates define the network interface configuration for the overcloud nodes. The values of the network details are provided by the network-environment.yaml file. These heat template files are located at /usr/share/openstack-tripleo-heat-templates/ on the undercloud node. For samples of these heat template files for NFV, see Sample DPDK SR-IOV YAML files . The Hardware requirements and Software requirements sections provide more details on how to plan and configure the heat template files for NFV using the Red Hat OpenStack Platform director. You can edit YAML files to configure NFV. For an introduction to the YAML file format, see YAML in a Nutshell . Data Plane Development Kit (DPDK) and Single Root I/O Virtualization (SR-IOV) Red Hat OpenStack Platform (RHOSP) supports NFV deployments with the inclusion of automated OVS-DPDK and SR-IOV configuration. Important Red Hat does not support the use of OVS-DPDK for non-NFV workloads.
If you need OVS-DPDK functionality for non-NFV workloads, contact your Technical Account Manager (TAM) or open a customer service request case to discuss a Support Exception and other options. To open a customer service request case, go to Create a case and choose Account > Customer Service Request . Hyper-converged Infrastructure (HCI) You can colocate the Compute sub-system with the Red Hat Ceph Storage nodes. This hyper-converged model delivers lower cost of entry, smaller initial deployment footprints, maximized capacity utilization, and more efficient management in NFV use cases. For more information about HCI, see the Hyperconverged Infrastructure Guide . Composable roles You can use composable roles to create custom deployments. Composable roles allow you to add or remove services from each role. For more information about the Composable Roles, see Composable services and custom roles . Open vSwitch (OVS) with LACP As of OVS 2.9, LACP with OVS is fully supported. This is not recommended for OpenStack control plane traffic, as OVS or OpenStack Networking interruptions might interfere with management. For more information, see Open vSwitch (OVS) bonding options . OVS Hardware offload Red Hat OpenStack Platform supports, with limitations, the deployment of OVS hardware offload. For information about deploying OVS with hardware offload, see Configuring OVS hardware offload . Open Virtual Network (OVN) The following NFV OVN configurations are available in RHOSP 16.1.4: Deploying OVN with OVS-DPDK and SR-IOV . Deploying OVN with OVS TC Flower offload . 1.3. NFV data plane connectivity With the introduction of NFV, more networking vendors are starting to implement their traditional devices as VNFs. While the majority of networking vendors are considering virtual machines, some are also investigating a container-based approach as a design choice. An OpenStack-based solution should be rich and flexible due to two primary reasons: Application readiness - Network vendors are currently in the process of transforming their devices into VNFs. Different VNFs in the market have different maturity levels; common barriers to this readiness include enabling RESTful interfaces in their APIs, evolving their data models to become stateless, and providing automated management operations. OpenStack should provide a common platform for all. Broad use-cases - NFV includes a broad range of applications that serve different use-cases. For example, Virtual Customer Premise Equipment (vCPE) aims at providing a number of network functions such as routing, firewall, virtual private network (VPN), and network address translation (NAT) at customer premises. Virtual Evolved Packet Core (vEPC) is a cloud architecture that provides a cost-effective platform for the core components of the Long-Term Evolution (LTE) network, allowing dynamic provisioning of gateways and mobile endpoints to sustain the increased volumes of data traffic from smartphones and other devices. These use cases are implemented using different network applications and protocols, and require different connectivity, isolation, and performance characteristics from the infrastructure. It is also common to separate the control plane interfaces and protocols from the actual forwarding plane. OpenStack must be flexible enough to offer different datapath connectivity options.
In principle, there are two common approaches for providing data plane connectivity to virtual machines: Direct hardware access bypasses the linux kernel and provides secure direct memory access (DMA) to the physical NIC using technologies such as PCI Passthrough or single root I/O virtualization (SR-IOV) for both Virtual Function (VF) and Physical Function (PF) pass-through. Using a virtual switch (vswitch) , implemented as a software service of the hypervisor. Virtual machines are connected to the vSwitch using virtual interfaces (vNICs), and the vSwitch is capable of forwarding traffic between virtual machines, as well as between virtual machines and the physical network. Some of the fast data path options are as follows: Single Root I/O Virtualization (SR-IOV) is a standard that makes a single PCI hardware device appear as multiple virtual PCI devices. It works by introducing Physical Functions (PFs), which are the fully featured PCIe functions that represent the physical hardware ports, and Virtual Functions (VFs), which are lightweight functions that are assigned to the virtual machines. To the VM, the VF resembles a regular NIC that communicates directly with the hardware. NICs support multiple VFs. Open vSwitch (OVS) is an open source software switch that is designed to be used as a virtual switch within a virtualized server environment. OVS supports the capabilities of a regular L2-L3 switch and also offers support to the SDN protocols such as OpenFlow to create user-defined overlay networks (for example, VXLAN). OVS uses Linux kernel networking to switch packets between virtual machines and across hosts using physical NIC. OVS now supports connection tracking (Conntrack) with built-in firewall capability to avoid the overhead of Linux bridges that use iptables/ebtables. Open vSwitch for Red Hat OpenStack Platform environments offers default OpenStack Networking (neutron) integration with OVS. Data Plane Development Kit (DPDK) consists of a set of libraries and poll mode drivers (PMD) for fast packet processing. It is designed to run mostly in the user-space, enabling applications to perform their own packet processing directly from or to the NIC. DPDK reduces latency and allows more packets to be processed. DPDK Poll Mode Drivers (PMDs) run in busy loop, constantly scanning the NIC ports on host and vNIC ports in guest for arrival of packets. DPDK accelerated Open vSwitch (OVS-DPDK) is Open vSwitch bundled with DPDK for a high performance user-space solution with Linux kernel bypass and direct memory access (DMA) to physical NICs. The idea is to replace the standard OVS kernel data path with a DPDK-based data path, creating a user-space vSwitch on the host that uses DPDK internally for its packet forwarding. The advantage of this architecture is that it is mostly transparent to users. The interfaces it exposes, such as OpenFlow, OVSDB, the command line, remain mostly the same. 1.4. ETSI NFV Architecture The European Telecommunications Standards Institute (ETSI) is an independent standardization group that develops standards for information and communications technologies (ICT) in Europe. Network functions virtualization (NFV) focuses on addressing problems involved in using proprietary hardware devices. With NFV, the necessity to install network-specific equipment is reduced, depending upon the use case requirements and economic benefits. 
The ETSI Industry Specification Group for Network Functions Virtualization (ETSI ISG NFV) sets the requirements, reference architecture, and the infrastructure specifications necessary to ensure virtualized functions are supported. Red Hat is offering an open-source-based, cloud-optimized solution to help the Communication Service Providers (CSPs) to achieve IT and network convergence. Red Hat adds NFV features such as single root I/O virtualization (SR-IOV) and Open vSwitch with Data Plane Development Kit (OVS-DPDK) to Red Hat OpenStack. 1.5. NFV ETSI architecture and components In general, a network functions virtualization (NFV) platform has the following components: Figure 1.1. NFV ETSI architecture and components Virtualized Network Functions (VNFs) - the software implementation of routers, firewalls, load balancers, broadband gateways, mobile packet processors, servicing nodes, signalling, location services, and other network functions. NFV Infrastructure (NFVi) - the physical resources (compute, storage, network) and the virtualization layer that make up the infrastructure. The network includes the datapath for forwarding packets between virtual machines and across hosts. This allows you to install VNFs without being concerned about the details of the underlying hardware. NFVi forms the foundation of the NFV stack. NFVi supports multi-tenancy and is managed by the Virtual Infrastructure Manager (VIM). Enhanced Platform Awareness (EPA) improves the virtual machine packet forwarding performance (throughput, latency, jitter) by exposing low-level CPU and NIC acceleration components to the VNF. NFV Management and Orchestration (MANO) - the management and orchestration layer focuses on all the service management tasks required throughout the life cycle of the VNF. The main goal of MANO is to allow service definition, automation, error-correlation, monitoring, and life-cycle management of the network functions offered by the operator to its customers, decoupled from the physical infrastructure. This decoupling requires additional layers of management, provided by the Virtual Network Function Manager (VNFM). VNFM manages the life cycle of the virtual machines and VNFs by either interacting directly with them or through the Element Management System (EMS) provided by the VNF vendor. The other important component defined by MANO is the Orchestrator, also known as NFVO. NFVO interfaces with various databases and systems including Operations/Business Support Systems (OSS/BSS) on the top and the VNFM on the bottom. If the NFVO wants to create a new service for a customer, it asks the VNFM to trigger the instantiation of a VNF, which may result in multiple virtual machines. Operations and Business Support Systems (OSS/BSS) - provides the essential business function applications, for example, operations support and billing. The OSS/BSS needs to be adapted to NFV, integrating with both legacy systems and the new MANO components. The BSS systems set policies based on service subscriptions and manage reporting and billing. Systems Administration, Automation and Life-Cycle Management - manages system administration, automation of the infrastructure components and life cycle of the NFVi platform. 1.6. Red Hat NFV components Red Hat's solution for NFV includes a range of products that can act as the different components of the NFV framework in the ETSI model. The following products from the Red Hat portfolio integrate into an NFV solution: Red Hat OpenStack Platform - Supports IT and NFV workloads.
The Enhanced Platform Awareness (EPA) features deliver deterministic performance improvements through CPU Pinning, Huge pages, Non-Uniform Memory Access (NUMA) affinity and network adaptors (NICs) that support SR-IOV and OVS-DPDK. Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host - Create virtual machines and containers as VNFs. Red Hat Ceph Storage - Provides the unified elastic and high-performance storage layer for all the needs of the service provider workloads. Red Hat JBoss Middleware and OpenShift Enterprise by Red Hat - Optionally provide the ability to modernize the OSS/BSS components. Red Hat CloudForms - Provides a VNF manager and presents data from multiple sources, such as the VIM and the NFVi in a unified display. Red Hat Satellite and Ansible by Red Hat - Optionally provide enhanced systems administration, automation and life-cycle management. 1.7. NFV installation summary The Red Hat OpenStack Platform director installs and manages a complete OpenStack environment. The director is based on the upstream OpenStack TripleO project, which is an abbreviation for "OpenStack-On-OpenStack". This project takes advantage of the OpenStack components to install a fully operational OpenStack environment; this includes a minimal OpenStack node called the undercloud. The undercloud provisions and controls the overcloud (a series of bare metal systems used as the production OpenStack nodes). The director provides a simple method for installing a complete Red Hat OpenStack Platform environment that is both lean and robust. For more information on installing the undercloud and overcloud, see the Director Installation and Usage guide. To install the NFV features, complete the following additional steps: Include SR-IOV and PCI Passthrough parameters in your network-environment.yaml file, update the post-install.yaml file for CPU tuning, modify the compute.yaml file, and run the overcloud_deploy.sh script to deploy the overcloud. Install the DPDK libraries and drivers for fast packet processing by polling data directly from the NICs. Include the DPDK parameters in your network-environment.yaml file, update the post-install.yaml files for CPU tuning, update the compute.yaml file to set the bridge with DPDK port, update the controller.yaml file to set the bridge and an interface with VLAN configured, and run the overcloud_deploy.sh script to deploy the overcloud.
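The overcloud_deploy.sh script mentioned above is typically a thin wrapper around a single openstack overcloud deploy call that passes the customized environment files. The following sketch assumes the custom templates live under /home/stack/templates and that your release ships a neutron-ovs-dpdk.yaml environment file; adjust both paths to match your deployment.
[
"#!/bin/bash",
"openstack overcloud deploy --templates -e /home/stack/templates/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml --log-file overcloud_deploy.log"
]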
6. Security | 6. Security OpenSCAP OpenSCAP is a set of open source libraries that support the Security Content Automation Protocol (SCAP) standards from the National Institute of Standards and Technology (NIST). OpenSCAP supports the following SCAP components: Common Vulnerabilities and Exposures (CVE) Common Platform Enumeration (CPE) Common Configuration Enumeration (CCE) Common Vulnerability Scoring System (CVSS) Open Vulnerability and Assessment Language (OVAL) Extensible Configuration Checklist Description Format (XCCDF) Additionally, the openSCAP package includes an application to generate SCAP reports about system configuration. openSCAP is now a fully supported package in Red Hat Enterprise Linux 6.1. Smartcard support for SPICE The Simple Protocol for Independent Computing Environments (SPICE) is a remote display protocol designed for virtual environments. SPICE users can view a virtualized desktop or server from the local system or any system with network access to the server. Red Hat Enterprise Linux 6.1 introduces support for smartcard passthrough via the SPICE protocol. Note The Security Guide assists users and administrators in learning the processes and practices of securing workstations and servers against local and remote intrusion, exploitation and malicious activity.
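As a quick illustration of the reporting application mentioned above, the oscap tool from the openscap-utils package can evaluate an XCCDF checklist and produce an HTML report. The profile ID and checklist path below are placeholders; list the profiles available in your installed SCAP content with oscap info first.
[
"yum install openscap-utils",
"oscap xccdf eval --profile <profile_id> --results results.xml --report report.html <xccdf_checklist.xml>"
]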
2.48. RHEA-2011:0604 - new package: virt-what | 2.48. RHEA-2011:0604 - new package: virt-what A new virt-what package is now available for Red Hat Enterprise Linux 6. The virt-what tool is used to detect whether the operating system is running inside a virtual machine. This enhancement update adds a new virt-what package to Red Hat Enterprise Linux 6. The virt-what utility enables programs to detect if they are running in a virtual machine, as well as details about the type of hypervisor. (BZ# 627886 ) All users requiring virt-what should install this newly-released package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/virt-what_new |
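Usage is a single command with no arguments: virt-what prints one detected virtualization fact per line, and prints nothing on bare metal. The kvm output shown below is an example taken from a KVM guest.
[
"yum install virt-what",
"virt-what kvm"
]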
3.8. Re-enrolling a Client into the IdM Domain | 3.8. Re-enrolling a Client into the IdM Domain If a client virtual machine has been destroyed and you still have its keytab, you can re-enroll the client: Interactively, using administrator credentials. See Section 3.8.1, "Re-enrolling a Client Interactively Using the Administrator Account" . Non-interactively, using a previously backed-up keytab file. See Section 3.8.2, "Re-enrolling a Client Non-interactively Using the Client Keytab" . Note You can only re-enroll clients whose domain entry is still active. If you uninstalled a client (using ipa-client-install --uninstall ) or disabled its host entry (using ipa host-disable ), you cannot re-enroll it. During re-enrollment, IdM performs the following: Revokes the original host certificate Generates a new host certificate Creates new SSH keys Generates a new keytab 3.8.1. Re-enrolling a Client Interactively Using the Administrator Account Re-create the client machine with the same host name. Run the ipa-client-install --force-join command on the client machine: The script prompts for a user whose identity will be used to enroll the client. By default, this is the admin user: 3.8.2. Re-enrolling a Client Non-interactively Using the Client Keytab Re-enrollment using the client keytab is appropriate for automated installation or in other situations when using the administrator password is not feasible. Back up the original client's keytab file, for example in the /tmp or /root directory. Re-create the client machine with the same host name. Re-enroll the client, and specify the keytab location using the --keytab option: Note The keytab specified in the --keytab option is only used when authenticating to initiate the enrollment. During the re-enrollment, IdM generates a new keytab for the client. | [
"ipa-client-install --force-join",
"User authorized to enroll computers: admin Password for [email protected]",
"ipa-client-install --keytab /tmp/krb5.keytab"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-re-enrolling |
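Putting the non-interactive flow together, a minimal sequence looks like the following sketch. The backup path /root/krb5.keytab.bak is an example location; the important point is to copy the keytab off the client before the machine is destroyed.
[
"# On the original client, before it is destroyed:",
"cp /etc/krb5.keytab /root/krb5.keytab.bak",
"# On the re-created client, after restoring the backup:",
"ipa-client-install --keytab /root/krb5.keytab.bak"
]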
5.4. Managing Smart Cards | 5.4. Managing Smart Cards You can use the Manage Smart Cards page to perform many of the operations that can be applied to one of the cryptographic keys stored on the token. You can use this page to format the token, set and reset the card's password, and to display card information. Two other operations, enrolling tokens and viewing the diagnostic logs, are also accessed through the Manage Smart Cards page. These operations are addressed in other sections. Figure 5.1. Manage Smart Cards Page 5.4.1. Formatting the Smart Card When you format a smart card, it is reset to the uninitialized state. This removes all previously generated user key pairs and erases the password set on the smart card during enrollment. The TPS server can be configured to load newer versions of the applet and symmetric keys onto the card. The TPS supports the CoolKey applet which is shipped with Red Hat Enterprise Linux 6. To format a smart card: Insert a supported smart card into the computer. Ensure that the card is listed in the Active Smart Cards table. In the Smart Card Functions section of the Manage Smart Cards screen, click Format . If the TPS has been configured for user authentication, enter the user credentials in the authentication dialog, and click Submit . During the formatting process, the status of the card changes to BUSY and a progress bar is displayed. A success message is displayed when the formatting process is complete. Click OK to close the message box. When the formatting process is complete, the Active Smart Cards table shows the card status as UNINITIALIZED. 5.4.2. Resetting a Smart Card Password Insert a supported smart card into the computer. Ensure that the card is listed in the Active Smart Cards table. In the Smart Card Functions section of the Manage Smart Cards screen, click Reset Password to display the Password dialog. Enter a new smart card password in the Enter new password field. Confirm the new smart card password in the Re-Enter password field, and then click OK . If the TPS has been configured for user authentication, enter the user credentials in the authentication dialog, and click Submit . Wait for the password to finish being reset. 5.4.3. Viewing Certificates The Smart Card Manager can display basic information about a selected smart card, including stored keys and certificates. To view certificate information: Insert a supported smart card into the computer. Ensure that the card is listed in the Active Smart Cards table. Select the card from the list, and click View Certificates . This displays basic information about the certificates stored on the card, including the serial number, certificate nickname, and validity dates. To view more detailed information about a certificate, select the certificate from the list and click View . 5.4.4. Importing CA Certificates The XULRunner Gecko engine implements stringent controls over which SSL-based URLs can be visited by clients such as a browser or the Enterprise Security Client. If the Enterprise Security Client (through the XULRunner framework) does not trust a URL, the URL cannot be visited. One way to trust an SSL-based URL is to import and trust the CA certificate chain of the CA which issued the certificates for the site. (The other is to create a trust security exception for the site, as in Section 5.4.5, "Adding Exceptions for Servers" .)
Any CA which issues certificates for smart cards must be trusted by the Enterprise Security Client application, which means that its CA certificate must be imported into the Enterprise Security Client. Open the CA's end user pages in a web browser. Click the Retrieval tab at the top. In the left menu, click the Import CA Certificate Chain link. Choose the radio button to download the chain as a file, and remember the location and name of the downloaded file. Open the Smart Card Manager GUI. Click the View Certificates button. Click the Authorities tab. Click Import . Browse to the CA certificate chain file, and select it. When prompted, confirm that you want to trust the CA. 5.4.5. Adding Exceptions for Servers The XULRunner Gecko engine implements stringent controls over which SSL-based URLs can be visited by clients such as a browser or the Enterprise Security Client. If the Enterprise Security Client (through the XULRunner framework) does not trust a URL, the URL cannot be visited. One way to trust an SSL-based URL is to create a trust security exception for the site, which imports the certificate for the site and forces the Enterprise Security Client to recognize it. (The other option is to import the CA certificate chain for the site and automatically trust it, as in Section 5.4.4, "Importing CA Certificates" .) The smart card can be used to access services or websites over SSL that require special security exceptions; these exceptions can be configured through the Enterprise Security Client, similar to configuring exceptions for websites in a browser like Mozilla Firefox. Open the Smart Card Manager UI. Click the View Certificates button. Click the Servers tab. Click Add Exception . Enter the URL, including any port numbers, for the site or service which the smart card will be used to access. Then click the Get Certificates button to download the server certificate for the site. Click Confirm Security Exception to add the site to the list of allowed sites. 5.4.6. Enrolling Smart Cards Most smart cards will be automatically enrolled using the automated enrollment procedure, described in Section 5.3, "Enrolling a Smart Card Automatically" . You can also use the Manage Smart Cards facility to manually enroll a smart card. If you enroll a token with the user key pairs, then the token can be used for certificate-based operations such as SSL client authentication and S/MIME. Note The TPS server can be configured to generate the user key pairs on the server and then archive them in the DRM subsystem for recovery if the token is lost. To enroll a smart card manually: Insert a supported, unenrolled smart card into the computer. Ensure that the card is listed in the Active Smart Cards table. Click Enroll to display the Password dialog. Enter a new key password in the Enter a password field. Confirm the new password in the Re-Enter a password field. Click OK to begin the enrollment. If the TPS has been configured for user authentication, enter the user credentials in the authentication dialog, and click Submit . If the TPS has been configured to archive keys to the DRM, the enrollment process will begin generating and archiving keys. When the enrollment is complete, the status of the smart card is displayed as ENROLLED.
"http s ://server.example.com: 9444/ca/ee/ca/"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/Using_the_Enterprise_Security_Client-Managing_Smart_Cards |
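Outside the GUI, the NSS command-line tools can confirm what the Smart Card Manager sees. This is a hedged sketch: it assumes the CoolKey PKCS #11 module is registered in the system NSS database at /etc/pki/nssdb, which depends on how the client was set up.
[
"modutil -dbdir /etc/pki/nssdb -list",
"certutil -L -d /etc/pki/nssdb -h all"
]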
Chapter 2. RHOSO MariaDB Galera clusters | Chapter 2. RHOSO MariaDB Galera clusters Red Hat OpenStack Services on OpenShift (RHOSO) deploys the following two MariaDB Galera clusters: openstack , the cluster hosting the databases for all RHOSO services. openstack-cell1 , the cluster hosting the databases specific to the Compute service (nova) for cell1 . Galera Custom Resources (CRs) configure both clusters. You can use the oc get galera command to retrieve more information about these Galera CRs, as shown in the following example: The Message and Ready columns show the startup state and the service availability of the Galera CR, specified by the Name column. When the Ready condition is True , the pods are started and ready to accept traffic. The mariadb-operator performs the following Galera cluster operations: Creates the pods that host the mysqld servers. Runs the logic for bootstrapping a Galera cluster. For example, the mariadb-operator starts the cluster using the most recent copy of the Galera database. Monitors the running Galera pods. Restarts the pods when the pods fail their health check. The mariadb-operator creates an OpenShift service object called openstack , which provides the IP address that the OpenStack services use to access the database. You can use the following command to reveal this IP address: The mariadb-operator also creates a 'headless' DNS service for the Galera pods, called openstack-galera , that is only used for the internal Galera cluster communication. OpenStack pods never access this service. RHOSO marks a Galera pod as available based on its Readiness health check. When the Galera cluster is bootstrapped by the mariadb-operator , the first available Galera pod receives all the incoming database traffic. The other available Galera pods remain in a state of hot standby. When the active pod is disconnected from the Galera cluster, for any reason, such as pod stop or pod restart, failover occurs. At this time, the first available Galera pod in hot standby becomes the active pod, and the openstack service is updated accordingly. You can use the following commands to troubleshoot the Galera services: When you analyze the openstack service, the IP field specifies the IP address assigned to the service and the Endpoints field specifies the IP address of the currently active Galera pod that is receiving all the traffic. When you analyze the headless openstack-galera service, the IP field is not specified but the Endpoints field specifies the IP addresses of all the Galera pods, of which there are typically three. 2.1. Monitoring Galera startup You can use the following command to monitor the startup of the Galera pods for the required Galera Custom Resource (CR): Replace <cr> with the required Galera CR, which, by default, is either openstack or openstack-cell1 . When you analyze the Galera CR: The Status reports the startup status of the Galera cluster. The Conditions report the status of the prerequisites that the Galera pods need to start. Note The Ready condition is true only when all the other conditions are True . When the mariadb-operator bootstraps a Galera cluster, it gathers information from every database replica, and then stores it in transient attributes. These transient Attributes appear in the Status of the Galera CR when the cluster is inspected while the Galera cluster is stopped and being restarted: Before starting a Galera Cluster, the mariadb-operator starts all the Galera pod replicas in a waiting state.
Even if you can see the pods by using the oc get pods command, they have not started mysqld servers yet. The mariadb-operator introspects the content of the database copy of each pod to extract the Seqno , the database sequence number. After the mariadb-operator extracts the Seqno from all of the pods, it decides which pod holds the most recent version of the database and bootstraps a new Galera cluster from this pod. This pod starts a mysqld server and a transient attribute gcomm:// appears in the status of the Galera CR. When the first mysqld server is ready to serve traffic, the attribute Bootstrapped becomes true , and the transient Attributes for this pod are removed from the status of the Galera CR. | [
"oc get galera NAME READY MESSAGE openstack True Setup complete openstack-cell1 True Setup complete",
"oc get service -l mariadb/name=openstack NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openstack ClusterIP 10.217.5.210 <none> 3306/TCP 7h",
"oc describe service openstack Name: openstack Namespace: openstack Labels: app=mariadb cr=mariadb-openstack mariadb/name=openstack mariadb/namespace=openstack mariadb/uid=796b4c64-fb39-4144-817f-34d2b309eb30 owner=mariadb-operator Annotations: <none> Selector: app=galera,cr=galera-openstack,statefulset.kubernetes.io/pod-name=openstack-galera-2 Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 172.30.133.220 IPs: 172.30.133.220 Port: database 3306/TCP TargetPort: 3306/TCP Endpoints: 192.168.56.28:3306 Session Affinity: None Events: <none>",
"oc describe service openstack-galera Name: openstack-galera Namespace: openstack Labels: <none> Annotations: <none> Selector: app=galera,cr=galera-openstack Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: None IPs: None Port: mysql 3306/TCP TargetPort: 3306/TCP Endpoints: 192.168.48.52:3306,192.168.52.43:3306,192.168.56.23:3306 Session Affinity: None Events: <none>",
"oc describe galera <cr>",
"Status: Conditions: Last Transition Time: 2024-04-22T07:32:06Z Message: Setup complete Reason: Ready Status: True Type: Ready Last Transition Time: 2024-04-22T07:31:49Z Message: Deployment completed Reason: Ready Status: True Type: DeploymentReady Last Transition Time: 2024-04-22T07:31:11Z Message: Exposing service completed Reason: Ready Status: True Type: ExposeServiceReady Last Transition Time: 2024-04-22T07:31:11Z Message: Input data complete Reason: Ready Status: True Type: InputReady Last Transition Time: 2024-04-22T07:31:11Z Message: RoleBinding created Reason: Ready Status: True Type: RoleBindingReady Last Transition Time: 2024-04-22T07:31:11Z Message: Role created Reason: Ready Status: True Type: RoleReady Last Transition Time: 2024-04-22T07:31:11Z Message: ServiceAccount created Reason: Ready Status: True Type: ServiceAccountReady Last Transition Time: 2024-04-22T07:31:11Z Message: Service config create completed Reason: Ready Status: True Type: ServiceConfigReady Last Transition Time: 2024-04-22T07:31:11Z Message: Input data complete Reason: Ready Status: True Type: TLSInputReady",
"Status: Attributes: openstack-galera-0: Seqno: 1232 openstack-galera-1: Container ID: cri-o://f56ec2389e878b462a54f5255dad83db29daf4d8e8cda338904bfd353b370165 Gcomm: gcomm:// Seqno: 1232 openstack-galera-2: Seqno: 1231 Bootstrapped: false"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/monitoring_high_availability_services/assembly_rhoso-galera-clusters |
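To observe a bootstrap or failover as it happens, you can combine a watch on the Galera pods with a check of the active endpoint. The app=galera label comes from the service selectors shown above; openstack is the default Galera CR name.
[
"oc get pods -l app=galera -w",
"oc get endpoints openstack",
"oc describe galera openstack"
]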
Chapter 1. Introduction | Chapter 1. Introduction This document provides information on planning and implementing automated takeover for SAP HANA Scale-Out System Replication deployments. SAP HANA System Replication in this solution provides continuous synchronization between two SAP HANA databases to support high availability and disaster recovery. The challenges of real implementations are typically more complex than can be covered in upfront testing. Please ensure that your environment is tested extensively. Red Hat recommends contracting a certified consultant familiar with both SAP HANA and the Pacemaker-based RHEL High Availability Add-On to implement the setup and subsequent operation. As SAP HANA takes on a central function as the primary database platform for SAP landscapes, requirements for stability and reliability increase dramatically. Red Hat Enterprise Linux (RHEL) for SAP Solutions meets those requirements by enhancing native SAP HANA replication and failover technology to automate the takeover process. During a failover in a SAP HANA Scale-Out System Replication deployment, a system administrator must manually instruct the application to perform a takeover to the secondary environment in case there is an issue in the primary environment. To automate this process Red Hat provides a complete solution for managing SAP HANA Scale-Out System Replication based on the RHEL HA Add-On that is part of the RHEL for SAP Solutions subscription. This documentation provides the concepts, planning, and high-level instructions on how to set up an automated SAP HANA Scale-Out System Replication solution using RHEL for SAP Solutions. This solution has been extensively tested and is proven to work, but the challenges of a real implementation are typically more complex than what this solution can cover. Red Hat therefore recommends that a certified consultant familiar with both SAP HANA and the Pacemaker-based RHEL High Availability Add-On sets up and subsequently services such a solution. For more information about RHEL for SAP Solutions, see Overview of Red Hat Enterprise Linux for SAP Solutions Subscription . This solution is for experienced Linux Administrators and SAP Certified Technology Associates. The solution contains planning and deployment information for SAP HANA Scale-Out with System Replication, as well as information on Pacemaker integration with RHEL 9. Building an SAP HANA scale-out environment with HANA System Replication and Pacemaker connectivity combines several complex technologies. This document contains references to SAP Notes or documentation that explains SAP HANA configuration. An SAP HANA system as a scale-out cluster primarily extends a growing SAP HANA landscape with new hardware easily. For this feature, essential components of the infrastructure, such as storage and network, require the use of shared resources. Based on this configuration, it is possible to extend the availability of the environment by using standby nodes, providing another level of High Availability solution before a site takeover is initiated. The SAP HANA scale-out solution can be extended to include two or more completely independent scale-out solutions that act as additional mirrors. The system replication process mirrors databases according to the active/passive method with maximum performance. The communication takes place entirely over the network. Additional infrastructure components are not needed. Pacemaker automates the system replication process when critical components fail. 
For this purpose, data from the scale-out environment as well as from the system replication process are evaluated to ensure continued operation. The cluster manages the primary IP address that the client uses to connect to the database. This ensures that in the event of the cluster triggering a database takeover, the clients can still connect to the active instance. 1.1. Supporting responsibilities For SAP HANA appliance setups, SAP, hardware partners/cloud providers support the following: Supported hardware and environments SAP HANA Storage configuration SAP HANA Scale-Out configuration (SAP cluster setup) SAP HANA System Replication (SAP cluster setup) Red Hat supports the following: Basic OS configuration for running SAP HANA on RHEL, based on SAP guidelines RHEL HA Add-On Red Hat HA solutions for SAP HANA Scale-Out System Replication For more information, see SAP HANA Master Guide - Operating SAP HANA - SAP HANA Appliance - Roles and Responsibilities . For TDI setups, take a look at SAP HANA Master Guide - Operating SAP HANA - SAP HANA Tailored Data Center Integration . 1.2. SAP HANA Scale-Out The process of scaling SAP HANA is very dynamic. During the initial setup of a server instance of a scale-up SAP HANA database, the system can be extended by additional CPUs and memory. If this expansion level is no longer sufficient, SAP extends the environment to a scale-out environment. With a properly prepared infrastructure, additional server instances can be added to the database. Note To "scale out", add 1-n additional SAP HANA database servers to an existing single-node database. Currently, all nodes have to be the same size in terms of CPU and RAM. The configuration of all replicated database sites has to be the same. So you have to upgrade the number of HANA nodes first on all sites before you resync the database. The prerequisite is shared storage and a corresponding network connection for all nodes. The shared storage is used to exchange data and to use standby nodes, which can take over the functionality of existing nodes in the event of a failure. Figure 1: Overview scale-up and scale-out systems Master nameserver A HANA Scale-Out environment has a master configuration that defines a running master instance on one of the nodes. These master instances are the primary contact for the application server. Up to three master roles can be defined for a scale-out high-availability configuration. The master roles are switched automatically if a failure occurs. This master configuration is compatible with the standby host configuration, in which a failed host can take over the tasks of a failed master node. Figure 2: Scale-out functionality of the used storage 1.3. Scale-Out storage configuration Scale-out storage configuration allows SAP HANA to be flexible in the scale-out environment and to dynamically move the functionality of the nodes in the event of a failure. Since the data is made available to all nodes, the SAP instances only have to be ready to take over the process of the failed components. There are two different shared storage scenarios for SAP HANA scale-out environments: The first scenario is shared file systems, which offer a file system of all directories over NFS or IBM's GPFS. In this scenario, the data is available on all nodes, all the time. The second scenario is non-shared storage, which is used to exclusively integrate the required data when needed.
All data is managed over the SAP HANA storage connector API, and it removes access from nodes using the appropriate mechanisms, for example, SCSI-3 reservations. For both scenarios, ensure that the /hana/shared directory is made available as a shared file system. This directory must be available and shared independently of the scenarios. Note If you want to monitor these shared file systems, you can optionally create file system resources. The entries in /etc/fstab should be removed; the mount is only managed by the file system resources. 1.3.1. Shared storage Shared file systems deliver the required data on every host. When configured, SAP HANA accesses the necessary data. The data can be shared easily because the shared directories are mounted on all nodes. The installation proceeds as normal after deployment. SAP HANA has access to all directories: /hana/data , /hana/log and /hana/shared . Figure 3: Functionality and working paths of the scale-out process with shared storage 1.3.2. Non-shared storage A non-shared storage configuration is more complex than a shared storage configuration. It requires a supported storage component and an individual configuration of the storage connector in the SAP HANA installation process. The SAP HANA database reconfigures the RHEL systems with several internal changes, for example, sudo access, lvm, or multipath. With every change of the node definition, SAP HANA is changing access to the storage directly over SCSI-3 reservations. The non-shared storage configuration is more optimized than the shared storage configuration because it has direct access to the storage system. Figure 4: Functionality and working paths of the scale-out process with the storage connector 1.4. SAP HANA System Replication SAP HANA System Replication provides a way for an SAP HANA environment to replicate the database across multiple sites. The network replicates the data and preloads it into the second SAP HANA installation. SAP HANA System Replication significantly reduces recovery time in case there is a failure of the primary HANA Scale-Out site. You must ensure that all replicated environments are built with identical specifications across hardware, software, and configuration settings. 1.5. Network configuration Three networks are the minimum network requirements for an SAP HANA Scale-Out System Replication setup that is managed by the RHEL HA Add-On. Nevertheless, an SAP-recommended network configuration should be used to build up a high-performing production environment. The three networks are: Public network: Required for the connection of the application server and clients (minimum requirement). Communication network: Required for system replication communication, internode communication, and storage configuration. Heartbeat network: Required for HA cluster communication. The recommended configuration is designed with the following networks: Application server network Client network Replication network Storage network Two internode networks Backup network Admin network Pacemaker network Based on the configuration of this solution, changes in the SAP HANA configuration process are required. The system replication hostname resolution is adjusted to the network that is used for the system replication. This is described in the SAP HANA Network Requirements documentation. Figure 5: Example Network configuration of two scale-out systems connected over SAP HANA system replication 1.6.
RHEL HA Add-On In the solution described in this document, the RHEL HA Add-On is used for ensuring the operation of SAP HANA Scale-Out System Replication across two sites. For this reason, resource agents published specifically for SAP HANA scale-out environments are used, which manage the SAP HANA Scale-Out System Replication environment. Based on the current status of the SAP HANA Scale-Out System Replication environment, a decision can be made to either switch the active master node to another available standby node or to switch the entire active side of the scale-out system replication environment to the second site. For this solution, a fencing mechanism is configured to avoid split-brain constellations. Figure 6: Overview of Pacemaker integration based on a system replication environment For more information about using the RHEL HA Add-On to set up HA clusters on RHEL 9, see the following documentation: Configuring and managing high availability clusters Support Policies for RHEL High Availability Clusters It is important to understand the scale-out architecture and how SAP HANA System Replication works. It works independently of the Pacemaker configuration. Nevertheless, only the SAP HANA scale-out resource agents of a specific release work with two or more sites because the resource agents get the information of all environments. The resource agents only manage the two sites that are part of the Pacemaker cluster. First, the resource agent watches for a stable scale-out environment on every site. It checks if enough SAP HANA scale-out master nameserver nodes are configured and in a valid state. Subsequently, the resource agent checks the system replication state. If everything is working correctly, it attaches the virtual IP address to the active master node on the master site of the system replication. In a failure state, the cluster is configured to switch the system replication configuration automatically. The definition of a failure state is dependent on the configuration of the master nameserver. For example, when one master nameserver is configured, the cluster switches directly to the other datacenter if the master node fails. If up to three master nameservers are configured, the SAP HANA environment heals itself before switching to the other datacenter. Pacemaker works with scoring numbers to decide what should be done. When running SAP HANA, it is very important that these parameters are not changed in a cluster setup. Pacemaker configuration is also based on fencing configuration that uses Shoot The Other Node In The Head (STONITH). An unresponsive node does not mean that it is not accessing data. Use STONITH to fence the node and be sure that data is safe. STONITH protects data from being corrupted by rogue nodes or concurrent access. If the communication between the two sites is lost, both sites may believe they are able to continue working, which can cause data corruption. This is also called a split-brain scenario. To prevent this, a quorum can be added, which helps to decide who is able to continue. A quorum can either be an additional node or a qdevice . In our example, we are using the additional node majoritymaker . Figure 7: Example of system replication with scale out 1.7. Resource agents The cluster configuration is working with two resource agents. 1.7.1. SAPHanaTopology resource agent The SAPHanaTopology resource agent is a cloned resource that receives all of its data from the SAP HANA environment.
A configuration process in SAP HANA called "system replication hook" generates this data. Based on this data, the resource agent calculates the Pacemaker scoring for the Pacemaker service. The scoring is used by the cluster to decide if it should initiate switching the system replication from one site to the other. If the scoring value is higher than a predefined value, the cluster switches the system replication. 1.7.2. SAPHanaController resource agent The SAPHanaController resource agent controls the SAP HANA environment and executes all commands for an automatic switch, or it changes the active site of the system replication. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/asmb_intro_automating-sap-hana-scale-out-v9 |
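The optional file system resources and the STONITH device described above can be created with the pcs command. The following shell sketch is illustrative only: the NFS export, node names, fence agent, and credentials are hypothetical placeholders and must be replaced with the values of the actual environment.

# Hypothetical sketch: optional monitoring of the shared /hana/shared file system on all nodes
pcs resource create fs_hana_shared ocf:heartbeat:Filesystem \
    device="nfs-server.example.com:/export/hana_shared" directory="/hana/shared" \
    fstype="nfs" op monitor interval=20s clone interleave=true

# Hypothetical sketch: one fencing (STONITH) device per node, repeated for every cluster node,
# so that an unresponsive node can be safely removed before it can corrupt data
pcs stonith create fence_hana_node1 fence_ipmilan \
    pcmk_host_list="hana-site1-node1" ip="192.0.2.101" username="admin" password="changeme"

Remember that when such Filesystem resources are used, the corresponding /etc/fstab entries must be removed, as noted above.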
Red Hat Data Grid | Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_code_tutorials/red-hat-data-grid |
12.29. SAP Gateway Translator | 12.29. SAP Gateway Translator Teiid provides a translator for SAP Gateway using the OData protocol. This translator is an extension of the OData Translator and uses the Teiid WS resource adapter for making web service calls. This translator understands most of the SAP-specific OData extensions to the metadata. When the metadata is imported from SAP Gateway, the Teiid models are created according to the SAP-specific EntitySet and Property annotations. These "execution properties" are supported in this translator: Table 12.30. Execution Properties Property Description Default DatabaseTimeZone The time zone of the database. Used when fetching date, time, or timestamp values The system default time zone SupportsOdataCount Supports $count True SupportsOdataFilter Supports $filter True SupportsOdataOrderBy Supports $orderby True SupportsOdataSkip Supports $skip True SupportsOdataTop Supports $top True Warning If the metadata of your service defines "pagable" and/or "topable" as "false" on any table, you must turn off the "SupportsOdataTop" and "SupportsOdataSkip" execution properties in your translator so that you do not end up with wrong results. SAP metadata can control these in a fine-grained fashion on any EntitySet, but Teiid can only control them at the translator level. Warning The sample examples defined at http://scn.sap.com/docs/DOC-31221 were found to be lacking full metadata in certain cases. For example, the "filterable" clause is never defined on some properties, but if you send a request with $filter, it is silently ignored. You can verify this behavior by executing the REST service directly in a web browser with the respective query. So, make sure you have implemented your service correctly, or turn off certain features in this translator by using an "execution properties" override. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sap_gateway_translator |
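One way to apply the override mentioned in the warning is a translator override inside the VDB definition. The vdb.xml fragment below is a hedged sketch rather than an excerpt from the product documentation: the VDB, model, source, and JNDI names are hypothetical, and only the two execution properties discussed above are shown.

<vdb name="sample-sap" version="1">
  <model name="SapGateway">
    <source name="sap-gw" translator-name="sap-gw-override" connection-jndi-name="java:/sapGatewayDS"/>
  </model>
  <translator name="sap-gw-override" type="sap-gateway">
    <property name="SupportsOdataTop" value="false"/>
    <property name="SupportsOdataSkip" value="false"/>
  </translator>
</vdb>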
Red Hat Ansible Automation Platform installation guide | Red Hat Ansible Automation Platform installation guide Red Hat Ansible Automation Platform 2.4 Install Ansible Automation Platform Red Hat Customer Content Services | [
"dnf install firewalld",
"firewall-cmd --permanent --add-service=<service>",
"firewall-cmd --reload",
"psql -h <db.example.com> -U superuser -p 5432 -d postgres <Password for user superuser>:",
"-h hostname --host=hostname",
"-d dbname --dbname=dbname",
"-U username --username=username",
"[database] pg_host='db.example.com' pg_port=5432 pg_database='awx' pg_username='awx' pg_password='redhat'",
"psql -d <automation hub database> -c \"SELECT * FROM pg_available_extensions WHERE name='hstore'\"",
"name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row)",
"name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows)",
"dnf install postgresql-contrib",
"psql -d <automation hub database> -c \"CREATE EXTENSION hstore;\"",
"CREATE EXTENSION",
"name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row)",
"yum -y install fio",
"fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1 > /tmp/fio_benchmark_write_iops.log 2>> /tmp/fio_write_iops_error.log",
"fio --name=read_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1 > /tmp/fio_benchmark_read_iops.log 2>> /tmp/fio_read_iops_error.log",
"cat /tmp/fio_benchmark_read_iops.log read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 [...] iops : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360 [...]",
"cd /opt/ansible-automation-platform/installer/",
"cd ansible-automation-platform-setup-bundle-<latest-version>",
"cd ansible-automation-platform-setup-<latest-version>",
"[automationcontroller] controller.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port=5432 pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"[automationcontroller] controller.example.com [automationhub] automationhub.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"automationhub_pg_host='192.0.2.10' automationhub_pg_port=5432",
"[database] 192.0.2.10",
"[automationhub] automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18 automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20 automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22",
"USE_X_FORWARDED_PORT = True USE_X_FORWARDED_HOST = True",
"mkdir /var/lib/pulp/",
"srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:var_lib_t:s0\" 0 0 srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:httpd_sys_content_rw_t:s0\" 0 0",
"systemctl daemon-reload",
"mount /var/lib/pulp",
"mkdir /var/lib/pulp/pulpcore_static",
"mount -a",
"setup.sh -- -b --become-user root",
"systemctl stop pulpcore.service",
"systemctl edit pulpcore.service",
"[Unit] After=network.target var-lib-pulp.mount",
"systemctl enable remote-fs.target",
"systemctl reboot",
"chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem",
"systemctl stop pulpcore.service",
"umount /var/lib/pulp/pulpcore_static",
"umount /var/lib/pulp/",
"srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:pulpcore_var_lib_t:s0\" 0 0",
"mount -a",
"{\"file\": \"filename\", \"signature\": \"filename.asc\"}",
"#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi",
"[all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh",
"automationhub_authentication_backend = \"ldap\" automationhub_ldap_server_uri = \"ldap://ldap:389\" (for LDAPs use automationhub_ldap_server_uri = \"ldaps://ldap-server-fqdn\") automationhub_ldap_bind_dn = \"cn=admin,dc=ansible,dc=com\" automationhub_ldap_bind_password = \"GoodNewsEveryone\" automationhub_ldap_user_search_base_dn = \"ou=people,dc=ansible,dc=com\" automationhub_ldap_group_search_base_dn = \"ou=people,dc=ansible,dc=com\"",
"auth_ldap_user_search_scope= 'SUBTREE' auth_ldap_user_search_filter= '(uid=%(user)s)' auth_ldap_group_search_scope= 'SUBTREE' auth_ldap_group_search_filter= '(objectClass=Group)' auth_ldap_group_type_class= 'django_auth_ldap.config:GroupOfNamesType'",
"#ldapextras.yml --- ldap_extra_settings: <LDAP_parameter>: <Values>",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {\"is_superuser\": \"cn=pah-admins,ou=groups,dc=example,dc=com\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {\"is_superuser\": \"cn=pah-admins,ou=groups,dc=example,dc=com\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_MIRROR_GROUPS: True",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_ATTR_MAP: {\"first_name\": \"givenName\", \"last_name\": \"sn\", \"email\": \"mail\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_DENY_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'",
"#ldapextras.yml --- ldap_extra_settings: GALAXY_LDAP_LOGGING: True",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_CACHE_TIMEOUT: 3600",
"Operation unavailable without authentication",
"GALAXY_LDAP_DISABLE_REFERRALS = true",
"[automationcontroller] controller.example.com [automationhub] automationhub.example.com [automationedacontroller] automationedacontroller.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' Automation hub configuration automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' Automation Event-Driven Ansible controller configuration automationedacontroller_admin_password='<eda-password>' automationedacontroller_pg_host='data.example.com' automationedacontroller_pg_port=5432 automationedacontroller_pg_database='automationedacontroller' automationedacontroller_pg_username='automationedacontroller' automationedacontroller_pg_password='<password>' Keystore file to install in SSO node sso_custom_keystore_file='/path/to/sso.jks' This install will deploy SSO with sso_use_https=True Keystore password is required for https enabled SSO sso_keystore_password='' This install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key Boolean flag used to verify Automation Controller's web certificates when making calls from Automation Event-Driven Ansible controller. automationedacontroller_controller_verify_ssl = true # Certificate and key to install in Automation Event-Driven Ansible controller node automationedacontroller_ssl_cert=/path/to/automationeda.crt automationedacontroller_ssl_key=/path/to/automationeda.key",
"automationedacontroller_safe_plugins: \"ansible.eda.webhook, ansible.eda.alertmanager\"",
"sudo ./setup.sh",
"subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms --enable rhel-8-for-x86_64-appstream-rpms",
"dnf install yum-utils reposync -m --download-metadata --gpgcheck -p /path/to/download",
"sudo dnf install httpd",
"/etc/httpd/conf.d/repository.conf DocumentRoot '/path/to/repos' <LocationMatch \"^/+USD\"> Options -Indexes ErrorDocument 403 /.noindex.html </LocationMatch> <Directory '/path/to/repos'> Options All Indexes FollowSymLinks AllowOverride None Require all granted </Directory>",
"sudo chown -R apache /path/to/repos",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/path/to/repos(/.*)?\" sudo restorecon -ir /path/to/repos",
"sudo systemctl enable --now httpd.service",
"sudo firewall-cmd --zone=public --add-service=http -add-service=https --permanent sudo firewall-cmd --reload",
"[Local-BaseOS] name=Local BaseOS baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [Local-AppStream] name=Local AppStream baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
"mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd",
"mkdir /media/rheldvd && mount -o loop rhrhel-8.6-x86_64-dvd.iso /media/rheldvd",
"[dvd-BaseOS] name=DVD for RHEL - BaseOS baseurl=file:///media/rheldvd/BaseOS enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [dvd-AppStream] name=DVD for RHEL - AppStream baseurl=file:///media/rheldvd/AppStream enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
"rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release",
"Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]",
"tar xvf ansible-automation-platform-setup-bundle-2.4-1.tar.gz cd ansible-automation-platform-setup-bundle-2.4-1",
"[automationcontroller] automationcontroller.example.org ansible_connection=local [automationcontroller:vars] peers=execution_nodes [automationhub] automationhub.example.org [all:vars] admin_password='password123' pg_database='awx' pg_username='awx' pg_password='dbpassword123' receptor_listener_port=27199 automationhub_admin_password='hubpassword123' automationhub_pg_host='automationcontroller.example.org' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password='dbpassword123' automationhub_pg_sslmode='prefer'",
"sudo -i cd /path/to/ansible-automation-platform-setup-bundle-2.4-1 ./setup.sh",
"cp /etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt update-ca-trust",
"[galaxy] server_list = private_hub [galaxy_server.private_hub] url=https://<hub_fqdn>/api/galaxy/ token=<token_from_private_hub>",
"ansible-galaxy collection publish <collection_tarball>",
"podman image save localhost/custom-ee:latest | gzip -c custom-ee-latest.tar.gz",
"podman image load -i custom-ee-latest.tar.gz podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest podman login <hub_fqdn> --tls-verify=false podman push <hub_fqdn>/custom-ee:latest",
"tar -xzvf ansible-automation-platform-setup-bundle-2.4-3-x86_64.tar.gz cd ansible-automation-platform-setup-bundle-2.4-3-x86_64/bundle/packages/el8/repos/ sudo dnf install ansible-builder-3.0.0-2.el8ap.noarch.rpm python39-requirements-parser-0.2.0-4.el8ap.noarch.rpm python39-bindep-2.10.2-3.el8ap.noarch.rpm python39-jsonschema-4.16.0-1.el8ap.noarch.rpm python39-pbr-5.8.1-2.el8ap.noarch.rpm python39-distro-1.6.0-3.el8pc.noarch.rpm python39-packaging-21.3-2.el8ap.noarch.rpm python39-parsley-1.3-2.el8pc.noarch.rpm python39-attrs-21.4.0-2.el8pc.noarch.rpm python39-pyrsistent-0.18.1-2.el8ap.x86_64.rpm python39-pyparsing-3.0.9-1.el8ap.noarch.rpm",
"mkdir USDHOME/custom-ee USDHOME/custom-ee/files cd USDHOME/custom-ee/",
"cat execution-environment.yml --- version: 3 images: base_image: name: private-hub.example.com/ee-minimal-rhel8:latest dependencies: python: requirements.txt galaxy: requirements.yml additional_build_files: - src: files/ansible.cfg dest: configs - src: files/pip.conf dest: configs - src: files/hub-ca.crt dest: configs # uncomment if custom RPM repositories are required #- src: files/custom.repo # dest: configs additional_build_steps: prepend_base: # copy a custom pip.conf to override the location of the PyPI content - ADD _build/configs/pip.conf /etc/pip.conf # remove the default UBI repository definition - RUN rm -f /etc/yum.repos.d/ubi.repo # copy the hub CA certificate and update the trust store - ADD _build/configs/hub-ca.crt /etc/pki/ca-trust/source/anchors - RUN update-ca-trust # if needed, uncomment to add a custom RPM repository configuration #- ADD _build/configs/custom.repo /etc/yum.repos.d/custom.repo prepend_galaxy: - ADD _build/configs/ansible.cfg ~/.ansible.cfg",
"cat files/ansible.cfg [galaxy] server_list = private_hub [galaxy_server.private_hub] url = https://private-hub.example.com/api/galaxy/",
"cat files/pip.conf [global] index-url = https://<pypi_mirror_fqdn>/ trusted-host = <pypi_mirror_fqdn>",
"cat files/custom.repo [ubi-8-baseos] name = Red Hat Universal Base Image 8 (RPMs) - BaseOS baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos enabled = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release gpgcheck = 1 [ubi-8-appstream] name = Red Hat Universal Base Image 8 (RPMs) - AppStream baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream enabled = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release gpgcheck = 1",
"cd USDHOME/custom-ee tree . . ├── bindep.txt ├── execution-environment.yml ├── files │ ├── ansible.cfg │ ├── custom.repo │ ├── hub-ca.crt │ └── pip.conf ├── requirements.txt └── requirements.yml 1 directory, 8 files",
"export ANSIBLE_GALAXY_SERVER_PRIVATE_HUB_TOKEN=<your_token>",
"cd USDHOME/custom-ee ansible-builder build -f execution-environment.yml -t private-hub.example.com/custom-ee:latest -v 3",
"podman pull private-hub.example.com/ee-minimal-rhel8:latest --tls-verify=false",
"sudo mkdir /etc/containers/certs.d/private-hub.example.com sudo cp USDHOME/custom-ee/files/hub-ca.crt /etc/containers/certs.d/private-hub.example.com",
"podman images --format \"table {{.ID}} {{.Repository}} {{.Tag}}\" IMAGE ID REPOSITORY TAG b38e3299a65e private-hub.example.com/custom-ee latest 8e38be53b486 private-hub.example.com/ee-minimal-rhel8 latest",
"podman login private-hub.example.com -u admin Password: Login Succeeded! podman push private-hub.example.com/custom-ee:latest",
"ls -1F ansible-automation-platform-setup-bundle-2.2.0-7/ ansible-automation-platform-setup-bundle-2.2.0-7.tar.gz ansible-automation-platform-setup-bundle-2.3-1.2/ ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz",
"cd ansible-automation-platform-setup-bundle-2.2.0-7 sudo ./setup.sh -b cd ..",
"cd ansible-automation-platform-setup-bundle-2.2.0-7 cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/ cd ..",
"cd ansible-automation-platform-setup-bundle-2.3-1.2 sudo ./setup.sh"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/red_hat_ansible_automation_platform_installation_guide/index |
Release notes for Red Hat build of OpenJDK 11.0.10 | Release notes for Red Hat build of OpenJDK 11.0.10 Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.10/index |
13.20. Installation Complete | 13.20. Installation Complete Congratulations! Your Red Hat Enterprise Linux installation is now complete! Click the Reboot button to reboot your system and begin using Red Hat Enterprise Linux. Remember to remove any installation media if it is not ejected automatically upon reboot. After your computer's normal power-up sequence has completed, Red Hat Enterprise Linux loads and starts. By default, the start process is hidden behind a graphical screen that displays a progress bar. Eventually, a GUI login screen (or if the X Window System is not installed, a login: prompt) appears. If your system was installed with the X Window System during this installation process, the first time you start your Red Hat Enterprise Linux system, applications to set up your system are launched. These applications guide you through initial configuration of Red Hat Enterprise Linux and allow you to set your system time and date, register your machine with Red Hat Network, and more. See Chapter 30, Initial Setup for information about the configuration process. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-complete-ppc |
18.4. Configuring a VNC Server | 18.4. Configuring a VNC Server To set up graphical desktop sharing between the host and the guest machine using Virtual Network Computing (VNC), a VNC server has to be configured on the guest you wish to connect to. To do this, VNC has to be specified as a graphics type in the devices element of the guest's XML file. For further information, see Section 23.17.11, "Graphical Framebuffers" . To connect to a VNC server, use the virt-viewer utility or the virt-manager interface. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Remote_management_of_guests-Configuring_a_VNC_Server |
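As a hedged illustration of the steps above, the graphics device of a guest can be switched to VNC with virsh, and the console can then be opened with virt-viewer. The guest name, host name, and listen address below are hypothetical placeholders.

# Edit the guest definition and set VNC as the graphics type inside the <devices> element
virsh edit rhel7-guest
#   <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>

# Connect to the guest console from a client machine
virt-viewer --connect qemu+ssh://root@host.example.com/system rhel7-guest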
Chapter 13. Configuring functions | Chapter 13. Configuring functions 13.1. Accessing secrets and config maps from functions using CLI After your functions have been deployed to the cluster, they can access data stored in secrets and config maps. This data can be mounted as volumes, or assigned to environment variables. You can configure this access interactively by using the Knative CLI, or by manually by editing the function configuration YAML file. Important To access secrets and config maps, the function must be deployed on the cluster. This functionality is not available to a function running locally. If a secret or config map value cannot be accessed, the deployment fails with an error message specifying the inaccessible values. 13.1.1. Modifying function access to secrets and config maps interactively You can manage the secrets and config maps accessed by your function by using the kn func config interactive utility. The available operations include listing, adding, and removing values stored in config maps and secrets as environment variables, as well as listing, adding, and removing volumes. This functionality enables you to manage what data stored on the cluster is accessible by your function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Run the following command in the function project directory: USD kn func config Alternatively, you can specify the function project directory using the --path or -p option. Use the interactive interface to perform the necessary operation. For example, using the utility to list configured volumes produces an output similar to this: USD kn func config ? What do you want to configure? Volumes ? What operation do you want to perform? List Configured Volumes mounts: - Secret "mysecret" mounted at path: "/workspace/secret" - Secret "mysecret2" mounted at path: "/workspace/secret2" This scheme shows all operations available in the interactive utility and how to navigate to them: Optional. Deploy the function to make the changes take effect: USD kn func deploy -p test 13.1.2. Modifying function access to secrets and config maps interactively by using specialized commands Every time you run the kn func config utility, you need to navigate the entire dialogue to select the operation you need, as shown in the section. To save steps, you can directly execute a specific operation by running a more specific form of the kn func config command: To list configured environment variables: USD kn func config envs [-p <function-project-path>] To add environment variables to the function configuration: USD kn func config envs add [-p <function-project-path>] To remove environment variables from the function configuration: USD kn func config envs remove [-p <function-project-path>] To list configured volumes: USD kn func config volumes [-p <function-project-path>] To add a volume to the function configuration: USD kn func config volumes add [-p <function-project-path>] To remove a volume from the function configuration: USD kn func config volumes remove [-p <function-project-path>] 13.2. Configuring your function project using the func.yaml file The func.yaml file contains the configuration for your function project. Values specified in func.yaml are used when you execute a kn func command. For example, when you run the kn func build command, the value in the build field is used. 
In some cases, you can override these values with command line flags or environment variables. 13.2.1. Referencing local environment variables from func.yaml fields If you want to avoid storing sensitive information such as an API key in the function configuration, you can add a reference to an environment variable available in the local environment. You can do this by modifying the envs field in the func.yaml file. Prerequisites You need to have the function project created. The local environment needs to contain the variable that you want to reference. Procedure To refer to a local environment variable, use the following syntax: Substitute ENV_VAR with the name of the variable in the local environment that you want to use. For example, you might have the API_KEY variable available in the local environment. You can assign its value to the MY_API_KEY variable, which you can then directly use within your function: Example function name: test namespace: "" runtime: go ... envs: - name: MY_API_KEY value: '{{ env:API_KEY }}' ... 13.2.2. Adding annotations to functions You can add Kubernetes annotations to a deployed Serverless function. Annotations enable you to attach arbitrary metadata to a function, for example, a note about the function's purpose. Annotations are added to the annotations section of the func.yaml configuration file. There are two limitations of the function annotation feature: After a function annotation propagates to the corresponding Knative service on the cluster, it cannot be removed from the service by deleting it from the func.yaml file. You must remove the annotation from the Knative service by modifying the YAML file of the service directly, or by using the OpenShift Container Platform web console. You cannot set annotations that are set by Knative, for example, the autoscaling annotations. 13.2.3. Adding annotations to a function You can add annotations to a function. Similar to a label, an annotation is defined as a key-value map. Annotations are useful, for example, for providing metadata about a function, such as the function's author. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For every annotation that you want to add, add the following YAML to the annotations section: name: test namespace: "" runtime: go ... annotations: <annotation_name>: "<annotation_value>" 1 1 Substitute <annotation_name>: "<annotation_value>" with your annotation. For example, to indicate that a function was authored by Alice, you might include the following annotation: name: test namespace: "" runtime: go ... annotations: author: "[email protected]" Save the configuration. The time you deploy your function to the cluster, the annotations are added to the corresponding Knative service. 13.2.4. Additional resources Getting started with functions Knative documentation on Autoscaling Kubernetes documentation on managing resources for containers Knative documentation on configuring concurrency 13.2.5. Adding function access to secrets and config maps manually You can manually add configuration for accessing secrets and config maps to your function. This might be preferable to using the kn func config interactive utility and commands, for example when you have an existing configuration snippet. 13.2.5.1. Mounting a secret as a volume You can mount a secret as a volume. 
Once a secret is mounted, you can access it from the function as a regular file. This enables you to store on the cluster data needed by the function, for example, a list of URIs that need to be accessed by the function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each secret you want to mount as a volume, add the following YAML to the volumes section: name: test namespace: "" runtime: go ... volumes: - secret: mysecret path: /workspace/secret Substitute mysecret with the name of the target secret. Substitute /workspace/secret with the path where you want to mount the secret. For example, to mount the addresses secret, use the following YAML: name: test namespace: "" runtime: go ... volumes: - configMap: addresses path: /workspace/secret-addresses Save the configuration. 13.2.5.2. Mounting a config map as a volume You can mount a config map as a volume. Once a config map is mounted, you can access it from the function as a regular file. This enables you to store on the cluster data needed by the function, for example, a list of URIs that need to be accessed by the function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each config map you want to mount as a volume, add the following YAML to the volumes section: name: test namespace: "" runtime: go ... volumes: - configMap: myconfigmap path: /workspace/configmap Substitute myconfigmap with the name of the target config map. Substitute /workspace/configmap with the path where you want to mount the config map. For example, to mount the addresses config map, use the following YAML: name: test namespace: "" runtime: go ... volumes: - configMap: addresses path: /workspace/configmap-addresses Save the configuration. 13.2.5.3. Setting environment variable from a key value defined in a secret You can set an environment variable from a key value defined as a secret. A value previously stored in a secret can then be accessed as an environment variable by the function at runtime. This can be useful for getting access to a value stored in a secret, such as the ID of a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each value from a secret key-value pair that you want to assign to an environment variable, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - name: EXAMPLE value: '{{ secret:mysecret:key }}' Substitute EXAMPLE with the name of the environment variable. Substitute mysecret with the name of the target secret. Substitute key with the key mapped to the target value. For example, to access the user ID that is stored in userdetailssecret , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:userdetailssecret:userid }}' Save the configuration. 13.2.5.4. Setting environment variable from a key value defined in a config map You can set an environment variable from a key value defined as a config map. A value previously stored in a config map can then be accessed as an environment variable by the function at runtime. 
This can be useful for getting access to a value stored in a config map, such as the ID of a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each value from a config map key-value pair that you want to assign to an environment variable, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - name: EXAMPLE value: '{{ configMap:myconfigmap:key }}' Substitute EXAMPLE with the name of the environment variable. Substitute myconfigmap with the name of the target config map. Substitute key with the key mapped to the target value. For example, to access the user ID that is stored in userdetailsmap , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:userdetailsmap:userid }}' Save the configuration. 13.2.5.5. Setting environment variables from all values defined in a secret You can set an environment variable from all values defined in a secret. Values previously stored in a secret can then be accessed as environment variables by the function at runtime. This can be useful for simultaneously getting access to a collection of values stored in a secret, for example, a set of data pertaining to a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For every secret for which you want to import all key-value pairs as environment variables, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - value: '{{ secret:mysecret }}' 1 1 Substitute mysecret with the name of the target secret. For example, to access all user data that is stored in userdetailssecret , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:userdetailssecret }}' Save the configuration. 13.2.5.6. Setting environment variables from all values defined in a config map You can set an environment variable from all values defined in a config map. Values previously stored in a config map can then be accessed as environment variables by the function at runtime. This can be useful for simultaneously getting access to a collection of values stored in a config map, for example, a set of data pertaining to a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For every config map for which you want to import all key-value pairs as environment variables, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:myconfigmap }}' 1 1 Substitute myconfigmap with the name of the target config map. For example, to access all user data that is stored in userdetailsmap , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:userdetailsmap }}' Save the file. 13.3. Configurable fields in func.yaml You can configure some of the func.yaml fields. 13.3.1. Configurable fields in func.yaml Many of the fields in func.yaml are generated automatically when you create, build, and deploy your function. 
However, there are also fields that you modify manually to change things, such as the function name or the image name. 13.3.1.1. buildEnvs The buildEnvs field enables you to set environment variables to be available to the environment that builds your function. Unlike variables set using envs , a variable set using buildEnv is not available during function runtime. You can set a buildEnv variable directly from a value. In the following example, the buildEnv variable named EXAMPLE1 is directly assigned the one value: buildEnvs: - name: EXAMPLE1 value: one You can also set a buildEnv variable from a local environment variable. In the following example, the buildEnv variable named EXAMPLE2 is assigned the value of the LOCAL_ENV_VAR local environment variable: buildEnvs: - name: EXAMPLE1 value: '{{ env:LOCAL_ENV_VAR }}' 13.3.1.2. envs The envs field enables you to set environment variables to be available to your function at runtime. You can set an environment variable in several different ways: Directly from a value. From a value assigned to a local environment variable. See the section "Referencing local environment variables from func.yaml fields" for more information. From a key-value pair stored in a secret or config map. You can also import all key-value pairs stored in a secret or config map, with keys used as names of the created environment variables. This examples demonstrates the different ways to set an environment variable: name: test namespace: "" runtime: go ... envs: - name: EXAMPLE1 1 value: value - name: EXAMPLE2 2 value: '{{ env:LOCAL_ENV_VALUE }}' - name: EXAMPLE3 3 value: '{{ secret:mysecret:key }}' - name: EXAMPLE4 4 value: '{{ configMap:myconfigmap:key }}' - value: '{{ secret:mysecret2 }}' 5 - value: '{{ configMap:myconfigmap2 }}' 6 1 An environment variable set directly from a value. 2 An environment variable set from a value assigned to a local environment variable. 3 An environment variable assigned from a key-value pair stored in a secret. 4 An environment variable assigned from a key-value pair stored in a config map. 5 A set of environment variables imported from key-value pairs of a secret. 6 A set of environment variables imported from key-value pairs of a config map. 13.3.1.3. builder The builder field specifies the strategy used by the function to build the image. It accepts values of pack or s2i . 13.3.1.4. build The build field indicates how the function should be built. The value local indicates that the function is built locally on your machine. The value git indicates that the function is built on a cluster by using the values specified in the git field. 13.3.1.5. volumes The volumes field enables you to mount secrets and config maps as a volume accessible to the function at the specified path, as shown in the following example: name: test namespace: "" runtime: go ... volumes: - secret: mysecret 1 path: /workspace/secret - configMap: myconfigmap 2 path: /workspace/configmap 1 The mysecret secret is mounted as a volume residing at /workspace/secret . 2 The myconfigmap config map is mounted as a volume residing at /workspace/configmap . 13.3.1.6. options The options field enables you to modify Knative Service properties for the deployed function, such as autoscaling. If these options are not set, the default ones are used. These options are available: scale min : The minimum number of replicas. Must be a non-negative integer. The default is 0. max : The maximum number of replicas. Must be a non-negative integer. The default is 0, which means no limit. 
metric : Defines which metric type is watched by the Autoscaler. It can be set to concurrency , which is the default, or rps . target : Recommendation for when to scale up based on the number of concurrently incoming requests. The target option can be a float value greater than 0.01. The default is 100, unless the options.resources.limits.concurrency is set, in which case target defaults to its value. utilization : Percentage of concurrent requests utilization allowed before scaling up. It can be a float value between 1 and 100. The default is 70. resources requests cpu : A CPU resource request for the container with deployed function. memory : A memory resource request for the container with deployed function. limits cpu : A CPU resource limit for the container with deployed function. memory : A memory resource limit for the container with deployed function. concurrency : Hard Limit of concurrent requests to be processed by a single replica. It can be integer value greater than or equal to 0, default is 0 - meaning no limit. This is an example configuration of the scale options: name: test namespace: "" runtime: go ... options: scale: min: 0 max: 10 metric: concurrency target: 75 utilization: 75 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 1000m memory: 256Mi concurrency: 100 13.3.1.7. image The image field sets the image name for your function after it has been built. You can modify this field. If you do, the time you run kn func build or kn func deploy , the function image will be created with the new name. 13.3.1.8. imageDigest The imageDigest field contains the SHA256 hash of the image manifest when the function is deployed. Do not modify this value. 13.3.1.9. labels The labels field enables you to set labels on a deployed function. You can set a label directly from a value. In the following example, the label with the role key is directly assigned the value of backend : labels: - key: role value: backend You can also set a label from a local environment variable. In the following example, the label with the author key is assigned the value of the USER local environment variable: labels: - key: author value: '{{ env:USER }}' 13.3.1.10. name The name field defines the name of your function. This value is used as the name of your Knative service when it is deployed. You can change this field to rename the function on subsequent deployments. 13.3.1.11. namespace The namespace field specifies the namespace in which your function is deployed. 13.3.1.12. runtime The runtime field specifies the language runtime for your function, for example, python . | [
"kn func config",
"kn func config ? What do you want to configure? Volumes ? What operation do you want to perform? List Configured Volumes mounts: - Secret \"mysecret\" mounted at path: \"/workspace/secret\" - Secret \"mysecret2\" mounted at path: \"/workspace/secret2\"",
"kn func config ├─> Environment variables │ ├─> Add │ │ ├─> ConfigMap: Add all key-value pairs from a config map │ │ ├─> ConfigMap: Add value from a key in a config map │ │ ├─> Secret: Add all key-value pairs from a secret │ │ └─> Secret: Add value from a key in a secret │ ├─> List: List all configured environment variables │ └─> Remove: Remove a configured environment variable └─> Volumes ├─> Add │ ├─> ConfigMap: Mount a config map as a volume │ └─> Secret: Mount a secret as a volume ├─> List: List all configured volumes └─> Remove: Remove a configured volume",
"kn func deploy -p test",
"kn func config envs [-p <function-project-path>]",
"kn func config envs add [-p <function-project-path>]",
"kn func config envs remove [-p <function-project-path>]",
"kn func config volumes [-p <function-project-path>]",
"kn func config volumes add [-p <function-project-path>]",
"kn func config volumes remove [-p <function-project-path>]",
"{{ env:ENV_VAR }}",
"name: test namespace: \"\" runtime: go envs: - name: MY_API_KEY value: '{{ env:API_KEY }}'",
"name: test namespace: \"\" runtime: go annotations: <annotation_name>: \"<annotation_value>\" 1",
"name: test namespace: \"\" runtime: go annotations: author: \"[email protected]\"",
"name: test namespace: \"\" runtime: go volumes: - secret: mysecret path: /workspace/secret",
"name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/secret-addresses",
"name: test namespace: \"\" runtime: go volumes: - configMap: myconfigmap path: /workspace/configmap",
"name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/configmap-addresses",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ secret:mysecret:key }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret:userid }}'",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ configMap:myconfigmap:key }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap:userid }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ secret:mysecret }}' 1",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:myconfigmap }}' 1",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap }}'",
"buildEnvs: - name: EXAMPLE1 value: one",
"buildEnvs: - name: EXAMPLE1 value: '{{ env:LOCAL_ENV_VAR }}'",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE1 1 value: value - name: EXAMPLE2 2 value: '{{ env:LOCAL_ENV_VALUE }}' - name: EXAMPLE3 3 value: '{{ secret:mysecret:key }}' - name: EXAMPLE4 4 value: '{{ configMap:myconfigmap:key }}' - value: '{{ secret:mysecret2 }}' 5 - value: '{{ configMap:myconfigmap2 }}' 6",
"name: test namespace: \"\" runtime: go volumes: - secret: mysecret 1 path: /workspace/secret - configMap: myconfigmap 2 path: /workspace/configmap",
"name: test namespace: \"\" runtime: go options: scale: min: 0 max: 10 metric: concurrency target: 75 utilization: 75 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 1000m memory: 256Mi concurrency: 100",
"labels: - key: role value: backend",
"labels: - key: author value: '{{ env:USER }}'"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/functions/configuring-functions |
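The builder, build, name, namespace, and runtime fields described in section 13.3.1 are not shown together in any snippet above. The following func.yaml fragment is a hedged sketch that combines them; the function name, namespace, and image reference are hypothetical.

name: test
namespace: my-functions
runtime: go
image: image-registry.example.com/my-functions/test:latest
builder: s2i    # the other accepted value is "pack"
build: local    # set to "git" to build on the cluster from the values in the git field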
Chapter 10. ServiceAccount [v1] | Chapter 10. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether pods running as this service account should have an API token automatically mounted. Can be overridden at the pod level. imagePullSecrets array ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata secrets array Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret secrets[] object ObjectReference contains enough information to let you inspect or modify the referred object. 10.1.1. .imagePullSecrets Description ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod Type array 10.1.2. .imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 10.1.3. 
.secrets Description Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret Type array 10.1.4. .secrets[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.2. API endpoints The following API endpoints are available: /api/v1/serviceaccounts GET : list or watch objects of kind ServiceAccount /api/v1/watch/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts DELETE : delete collection of ServiceAccount GET : list or watch objects of kind ServiceAccount POST : create a ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts/{name} DELETE : delete a ServiceAccount GET : read the specified ServiceAccount PATCH : partially update the specified ServiceAccount PUT : replace the specified ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} GET : watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 10.2.1. /api/v1/serviceaccounts HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.1. HTTP responses HTTP code Reponse body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty 10.2.2. 
/api/v1/watch/serviceaccounts HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{namespace}/serviceaccounts HTTP method DELETE Description delete collection of ServiceAccount Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.5. HTTP responses HTTP code Reponse body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty HTTP method POST Description create a ServiceAccount Table 10.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.7. Body parameters Parameter Type Description body ServiceAccount schema Table 10.8. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{namespace}/serviceaccounts HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /api/v1/namespaces/{namespace}/serviceaccounts/{name} Table 10.10. Global path parameters Parameter Type Description name string name of the ServiceAccount HTTP method DELETE Description delete a ServiceAccount Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.12. 
HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty HTTP method GET Description read the specified ServiceAccount Table 10.13. HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ServiceAccount Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.15. HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ServiceAccount Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.17. Body parameters Parameter Type Description body ServiceAccount schema Table 10.18. HTTP responses HTTP code Response body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty 10.2.6. /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} Table 10.19. 
Global path parameters Parameter Type Description name string name of the ServiceAccount HTTP method GET Description watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_apis/serviceaccount-v1 
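Example: Listing ServiceAccounts from a Java client. The following is a minimal, illustrative sketch of calling the list endpoint documented above using Java's built-in HTTP client. The API server address, namespace, and bearer token are placeholder assumptions, not values from this reference; in practice most users list ServiceAccounts with the oc CLI or a generated Kubernetes client library rather than raw HTTP.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListServiceAccounts {
    public static void main(String[] args) throws Exception {
        // Placeholder values: substitute your API server URL, namespace, and a valid token.
        String apiServer = "https://api.example.cluster:6443";
        String namespace = "default";
        String token = System.getenv("K8S_TOKEN");

        // GET /api/v1/namespaces/{namespace}/serviceaccounts, as documented above.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/api/v1/namespaces/" + namespace + "/serviceaccounts"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        // The default HttpClient validates TLS certificates, so the cluster CA must be
        // trusted by the JVM for this call to succeed.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode()); // 200 - OK on success, 401 - Unauthorized otherwise
        System.out.println(response.body());       // a ServiceAccountList object in JSON form
    }
}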
8.217. shadow-utils | 8.217.1. RHBA-2014:1522 - shadow-utils bug fix update Updated shadow-utils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The shadow-utils packages include programs for converting UNIX password files to the shadow password format, as well as utilities for managing user and group accounts. Bug Fixes BZ# 787742 Previously, the pwconv and grpconv utilities improperly parsed shadow and gshadow files that contained errors. Consequently, when writing corrected shadow and gshadow files, only the first error on two consecutive erroneous lines was corrected. With this update, pwconv and grpconv parse files with errors correctly, and all lines are corrected in the newly written files. BZ# 890222 Due to a bug in the code that parses the /etc/group file, the useradd command could terminate unexpectedly with a segmentation fault when merging group entries. The parsing code has been fixed, and useradd now correctly merges group entries. BZ# 955769 Previously, the useradd command assigned the SELinux user to the new user being created after creating and populating the home directory of the user. Consequently, the SELinux contexts of the home directory files were incorrect. With this update, the SELinux user is assigned to the newly created user before populating the home directory, and the SELinux contexts on the home directory files for newly created users are now correct. BZ# 956742 Previously, the chage command did not properly detect invalid date specifications and therefore did not fail when one was supplied. With this update, chage properly detects invalid date specifications and fails if an invalid date is specified. BZ# 957782 Prior to this update, the chage command incorrectly handled dates in the format "[month] DD YYYY" as "[month] DD hhmm". As a consequence, if chage was used with such a date specification, the date was set to an unexpected value. The updated chage code correctly handles dates in this format. As a result, if chage is used with such a date specification, the date is set to the expected value. BZ# 993049 Previously, the newgrp command always tried to find a group with a matching group ID (GID) among all the groups on the system. If the groups were stored on an LDAP server, this caused a large amount of data to be pulled from the LDAP server on each invocation of newgrp. The underlying source code has been fixed, and newgrp no longer tries to find a matching group among all the groups on the system if the user is a member of the group specified on the command line. Thus, no extra data is pulled from the LDAP server. BZ# 1016516 The usermod code improperly handled the creation of a new entry in the /etc/shadow file. As a consequence, the "usermod -p" command failed to set the new password if the entry in the /etc/shadow file was missing. The updated usermod code properly creates a new entry in /etc/shadow if it is missing, and the "usermod -p" command now sets the new password correctly even if the user's entry in /etc/shadow is missing. Users of shadow-utils are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/shadow-utils 
Chapter 5. Configuration options | Chapter 5. Configuration options This chapter lists the available configuration options for AMQ JMS. JMS configuration options are set as query parameters on the connection URI. For more information, see Section 4.3, "Connection URIs" . 5.1. JMS options These options control the behaviour of JMS objects such as Connection , Session , MessageConsumer , and MessageProducer . jms.username The user name the client uses to authenticate the connection. jms.password The password the client uses to authenticate the connection. jms.clientID The client ID that the client applies to the connection. jms.forceAsyncSend If enabled, all messages from a MessageProducer are sent asynchronously. Otherwise, only certain kinds, such as non-persistent messages or those inside a transaction, are sent asynchronously. It is disabled by default. jms.forceSyncSend If enabled, all messages from a MessageProducer are sent synchronously. It is disabled by default. jms.forceAsyncAcks If enabled, all message acknowledgments are sent asynchronously. It is disabled by default. jms.localMessageExpiry If enabled, any expired messages received by a MessageConsumer are filtered out and not delivered. It is enabled by default. jms.localMessagePriority If enabled, prefetched messages are reordered locally based on their message priority value. It is disabled by default. jms.validatePropertyNames If enabled, message property names are required to be valid Java identifiers. It is enabled by default. jms.receiveLocalOnly If enabled, calls to receive with a timeout argument check a consumer's local message buffer only. Otherwise, if the timeout expires, the remote peer is checked to ensure there are really no messages. It is disabled by default. jms.receiveNoWaitLocalOnly If enabled, calls to receiveNoWait check a consumer's local message buffer only. Otherwise, the remote peer is checked to ensure there are really no messages available. It is disabled by default. jms.queuePrefix An optional prefix value added to the name of any Queue created from a Session . jms.topicPrefix An optional prefix value added to the name of any Topic created from a Session . jms.closeTimeout The time in milliseconds for which the client waits for normal resource closure before returning. The default is 60000 (60 seconds). jms.connectTimeout The time in milliseconds for which the client waits for connection establishment before returning with an error. The default is 15000 (15 seconds). jms.sendTimeout The time in milliseconds for which the client waits for completion of a synchronous message send before returning an error. By default the client waits indefinitely for a send to complete. jms.requestTimeout The time in milliseconds for which the client waits for completion of various synchronous interactions like opening a producer or consumer (excluding send) with the remote peer before returning an error. By default the client waits indefinitely for a request to complete. jms.clientIDPrefix An optional prefix value used to generate client ID values when a new Connection is created by the ConnectionFactory . The default is ID: . jms.connectionIDPrefix An optional prefix value used to generate connection ID values when a new Connection is created by the ConnectionFactory . This connection ID is used when logging some information from the Connection object, so a configurable prefix can make breadcrumbing the logs easier. The default is ID: . 
jms.populateJMSXUserID If enabled, populate the JMSXUserID property for each sent message using the authenticated user name from the connection. It is disabled by default. jms.awaitClientID If enabled, a connection with no client ID configured in the URI waits for a client ID to be set programmatically, or for confirmation that none can be set, before sending the AMQP connection "open". It is enabled by default. jms.useDaemonThread If enabled, a connection uses a daemon thread for its executor, rather than a non-daemon thread. It is disabled by default. jms.tracing The name of a tracing provider. Supported values are opentracing and noop . The default is noop . Prefetch policy options Prefetch policy determines how many messages each MessageConsumer fetches from the remote peer and holds in a local "prefetch" buffer. jms.prefetchPolicy.queuePrefetch The default is 1000. jms.prefetchPolicy.topicPrefetch The default is 1000. jms.prefetchPolicy.queueBrowserPrefetch The default is 1000. jms.prefetchPolicy.durableTopicPrefetch The default is 1000. jms.prefetchPolicy.all This can be used to set all prefetch values at once. The value of prefetch can affect the distribution of messages to multiple consumers on a queue or shared subscription. A higher value can result in larger batches sent at once to each consumer. To achieve more even round-robin distribution, use a lower value. Redelivery policy options Redelivery policy controls how redelivered messages are handled on the client. jms.redeliveryPolicy.maxRedeliveries Controls when an incoming message is rejected based on the number of times it has been redelivered. A value of 0 indicates that no message redeliveries are accepted. A value of 5 allows a message to be redelivered five times, and so on. The default is -1, meaning no limit. jms.redeliveryPolicy.outcome Controls the outcome applied to a message once it has exceeded the configured maxRedeliveries value. Supported values are: ACCEPTED , REJECTED , RELEASED , MODIFIED_FAILED and MODIFIED_FAILED_UNDELIVERABLE . The default value is MODIFIED_FAILED_UNDELIVERABLE . Message ID policy options Message ID policy controls the data type of the message ID assigned to messages sent from the client. jms.messageIDPolicy.messageIDType By default, a generated String value is used for the message ID on outgoing messages. Other available types are UUID , UUID_STRING , and PREFIXED_UUID_STRING . Presettle policy options Presettle policy controls when a producer or consumer instance is configured to use AMQP presettled messaging semantics. jms.presettlePolicy.presettleAll If enabled, all producers and non-transacted consumers created operate in presettled mode. It is disabled by default. jms.presettlePolicy.presettleProducers If enabled, all producers operate in presettled mode. It is disabled by default. jms.presettlePolicy.presettleTopicProducers If enabled, any producer that is sending to a Topic or TemporaryTopic destination operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleQueueProducers If enabled, any producer that is sending to a Queue or TemporaryQueue destination operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleTransactedProducers If enabled, any producer that is created in a transacted Session operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleConsumers If enabled, all consumers operate in presettled mode. It is disabled by default. 
jms.presettlePolicy.presettleTopicConsumers If enabled, any consumer that is receiving from a Topic or TemporaryTopic destination operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleQueueConsumers If enabled, any consumer that is receiving from a Queue or TemporaryQueue destination operates in presettled mode. It is disabled by default. Deserialization policy options Deserialization policy provides a means of controlling which Java types are trusted to be deserialized from the object stream while retrieving the body from an incoming ObjectMessage composed of serialized Java Object content. By default all types are trusted during an attempt to deserialize the body. The default deserialization policy provides URI options that allow specifying a whitelist and a blacklist of Java class or package names. jms.deserializationPolicy.whiteList A comma-separated list of class and package names that should be allowed when deserializing the contents of an ObjectMessage , unless overridden by blackList . The names in this list are not pattern values. The exact class or package name must be configured, as in java.util.Map or java.util . Package matches include sub-packages. The default is to allow all. jms.deserializationPolicy.blackList A comma-separated list of class and package names that should be rejected when deserializing the contents of a ObjectMessage . The names in this list are not pattern values. The exact class or package name must be configured, as in java.util.Map or java.util . Package matches include sub-packages. The default is to prevent none. 5.2. TCP options When connected to a remote server using plain TCP, the following options specify the behavior of the underlying socket. These options are appended to the connection URI along with any other configuration options. Example: A connection URI with transport options The complete set of TCP transport options is listed below. transport.sendBufferSize The send buffer size in bytes. The default is 65536 (64 KiB). transport.receiveBufferSize The receive buffer size in bytes. The default is 65536 (64 KiB). transport.trafficClass The default is 0. transport.connectTimeout The default is 60 seconds. transport.soTimeout The default is -1. transport.soLinger The default is -1. transport.tcpKeepAlive The default is false. transport.tcpNoDelay If enabled, do not delay and buffer TCP sends. It is enabled by default. transport.useEpoll When available, use the native epoll IO layer instead of the NIO layer. This can improve performance. It is enabled by default. 5.3. SSL/TLS options The SSL/TLS transport is enabled by using the amqps URI scheme. Because the SSL/TLS transport extends the functionality of the TCP-based transport, all of the TCP transport options are valid on an SSL/TLS transport URI. Example: A simple SSL/TLS connection URI The complete set of SSL/TLS transport options is listed below. transport.keyStoreLocation The path to the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStore system property is used. transport.keyStorePassword The password for the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStorePassword system property is used. transport.trustStoreLocation The path to the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStore system property is used. transport.trustStorePassword The password for the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStorePassword system property is used. 
transport.keyStoreType If unset, the value of the javax.net.ssl.keyStoreType system property is used. If the system property is unset, the default is JKS . transport.trustStoreType If unset, the value of the javax.net.ssl.trustStoreType system property is used. If the system property is unset, the default is JKS . transport.storeType Sets both keyStoreType and trustStoreType to the same value. If unset, keyStoreType and trustStoreType default to the values specified above. transport.contextProtocol The protocol argument used when getting an SSLContext. The default is TLS , or TLSv1.2 if using OpenSSL. transport.enabledCipherSuites A comma-separated list of cipher suites to enable. If unset, the context-default ciphers are used. Any disabled ciphers are removed from this list. transport.disabledCipherSuites A comma-separated list of cipher suites to disable. Ciphers listed here are removed from the enabled ciphers. transport.enabledProtocols A comma-separated list of protocols to enable. If unset, the context-default protocols are used. Any disabled protocols are removed from this list. transport.disabledProtocols A comma-separated list of protocols to disable. Protocols listed here are removed from the enabled protocol list. The default is SSLv2Hello,SSLv3 . transport.trustAll If enabled, trust the provided server certificate implicitly, regardless of any configured trust store. It is disabled by default. transport.verifyHost If enabled, verify that the connection hostname matches the provided server certificate. It is enabled by default. transport.keyAlias The alias to use when selecting a key pair from the key store if required to send a client certificate to the server. transport.useOpenSSL If enabled, use native OpenSSL libraries for SSL/TLS connections if available. It is disabled by default. For more information, see Section 7.1, "Enabling OpenSSL support" . 5.4. AMQP options The following options apply to aspects of behavior related to the AMQP wire protocol. amqp.idleTimeout The time in milliseconds after which the connection is failed if the peer sends no AMQP frames. The default is 60000 (1 minute). amqp.vhost The virtual host to connect to. This is used to populate the SASL and AMQP hostname fields. The default is the main hostname from the connection URI. amqp.saslLayer If enabled, SASL is used when establishing connections. It is enabled by default. amqp.saslMechanisms A comma-separated list of SASL mechanisms the client should allow selection of, if offered by the server and usable with the configured credentials. The supported mechanisms are EXTERNAL, SCRAM-SHA-256, SCRAM-SHA-1, CRAM-MD5, PLAIN, ANONYMOUS, and GSSAPI for Kerberos. The default is to allow selection from all mechanisms except GSSAPI, which must be explicitly included here to enable. amqp.maxFrameSize The maximum AMQP frame size in bytes allowed by the client. This value is advertised to the remote peer. The default is 1048576 (1 MiB). amqp.drainTimeout The time in milliseconds that the client waits for a response from the remote peer when a consumer drain request is made. If no response is seen in the allotted timeout period, the link is considered failed and the associated consumer is closed. The default is 60000 (1 minute). amqp.allowNonSecureRedirects If enabled, allow AMQP redirects to alternative hosts when the existing connection is secure and the alternative connection is not. For example, if enabled this would permit redirecting an SSL/TLS connection to a raw TCP connection. It is disabled by default. 
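Example: Combining JMS, transport, and AMQP options on one connection URI. The following is a minimal sketch of how options from the sections above might be consumed from application code. It assumes the client's org.apache.qpid.jms.JmsConnectionFactory class is available on the classpath (this class name is an assumption, not stated in this chapter); the broker host, credentials, queue name, and trust store path are placeholder values.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class ConfiguredConnectionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder broker and key material; adjust to your environment.
        String uri = "amqps://broker.example.com:5671"
                + "?jms.clientID=example-client"
                + "&jms.connectTimeout=30000"
                + "&transport.trustStoreLocation=/path/to/truststore.jks"
                + "&transport.trustStorePassword=changeit"
                + "&amqp.idleTimeout=120000";

        ConnectionFactory factory = new JmsConnectionFactory(uri);

        // Credentials could also be supplied with jms.username and jms.password on the URI.
        Connection connection = factory.createConnection("example-user", "example-password");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("example.queue");
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("hello"));

        connection.close();
    }
}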
5.5. Failover options Failover URIs start with the prefix failover: and contain a comma-separated list of connection URIs inside parentheses. Additional options are specified at the end. Options prefixed with jms. are applied to the overall failover URI, outside of parentheses, and affect the Connection object for its lifetime. Example: A failover URI with failover options The individual broker details within the parentheses can use the transport. or amqp. options defined earlier. These are applied as each host is connected to. Example: A failover URI with per-connection transport and AMQP options All of the configuration options for failover are listed below. failover.initialReconnectDelay The time in milliseconds the client waits before the first attempt to reconnect to a remote peer. The default is 0, meaning the first attempt happens immediately. failover.reconnectDelay The time in milliseconds between reconnection attempts. If the backoff option is not enabled, this value remains constant. The default is 10. failover.maxReconnectDelay The maximum time that the client waits before attempting to reconnect. This value is only used when the backoff feature is enabled to ensure that the delay does not grow too large. The default is 30 seconds. failover.useReconnectBackOff If enabled, the time between reconnection attempts grows based on a configured multiplier. It is enabled by default. failover.reconnectBackOffMultiplier The multiplier used to grow the reconnection delay value. The default is 2.0. failover.maxReconnectAttempts The number of reconnection attempts allowed before reporting the connection as failed to the client. The default is -1, meaning no limit. failover.startupMaxReconnectAttempts For a client that has never connected to a remote peer before, this option controls how many attempts are made to connect before reporting the connection as failed. If unset, the value of maxReconnectAttempts is used. failover.warnAfterReconnectAttempts The number of failed reconnection attempts until a warning is logged. The default is 10. failover.randomize If enabled, the set of failover URIs is randomly shuffled before attempting to connect to one of them. This can help to distribute client connections more evenly across multiple remote peers. It is disabled by default. failover.amqpOpenServerListAction Controls how the failover transport behaves when the connection "open" frame from the server provides a list of failover hosts to the client. Valid values are REPLACE , ADD , or IGNORE . If REPLACE is configured, all failover URIs other than the one for the current server are replaced with those provided by the server. If ADD is configured, the URIs provided by the server are added to the existing set of failover URIs, with deduplication. If IGNORE is configured, any updates from the server are ignored and no changes are made to the set of failover URIs in use. The default is REPLACE . The failover URI also supports defining nested options as a means of specifying AMQP and transport option values applicable to all the individual nested broker URIs. This is accomplished using the same transport. and amqp. URI options outlined earlier for a non-failover broker URI but prefixed with failover.nested. . For example, to apply the same value for the amqp.vhost option to every broker connected to you might have a URI like the following: Example: A failover URI with shared transport and AMQP options 5.6. 
Discovery options The client has an optional discovery module that provides a customized failover layer where the broker URIs to connect to are not given in the initial URI but instead are discovered by interacting with a discovery agent. There are currently two discovery agent implementations: a file watcher that loads URIs from a file and a multicast listener that works with ActiveMQ 5.x brokers that are configured to broadcast their broker addresses for listening clients. The general set of failover-related options when using discovery are the same as those detailed earlier, with the main prefix changed from failover. to discovery. , and with the nested prefix used to supply URI options common to all the discovered broker URIs. For example, without the agent URI details, a general discovery URI might look like the following: Example: A discovery URI To use the file watcher discovery agent, create an agent URI like the following: Example: A discovery URI using the file watcher agent The URI options for the file watcher discovery agent are listed below. updateInterval The time in milliseconds between checks for file changes. The default is 30000 (30 seconds). To use the multicast discovery agent with an ActiveMQ 5.x broker, create an agent URI like the following: Example: A discovery URI using the multicast listener agent Note that the use of default as the host in the multicast agent URI above is a special value that is substituted by the agent with the default 239.255.2.3:6155 . You can change this to specify the actual IP address and port in use with your multicast configuration. The URI option for the multicast discovery agent is listed below. group The multicast group used to listen for updates. The default is default . | [
"amqp://localhost:5672?jms.clientID=foo&transport.connectTimeout=30000",
"amqps://myhost.mydomain:5671",
"failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=foo&failover.maxReconnectAttempts=20",
"failover:(amqp://host1:5672?amqp.option=value,amqp://host2:5672?transport.option=value)?jms.clientID=foo",
"failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=foo&failover.nested.amqp.vhost=myhost",
"discovery:(<agent-uri>)?discovery.maxReconnectAttempts=20&discovery.discovered.jms.clientID=foo",
"discovery:(file:///path/to/monitored-file?updateInterval=60000)",
"discovery:(multicast://default?group=default)"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_client/configuration_options |
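Example: Using a failover URI from application code. As a companion to the failover and discovery options described in this chapter, the following sketch shows a failover URI in use. The broker hosts, virtual host, and option values are placeholders, and it relies on the same assumed JmsConnectionFactory class as the earlier example.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.qpid.jms.JmsConnectionFactory;

public class FailoverConnectionExample {
    public static void main(String[] args) throws Exception {
        // Two placeholder brokers; reconnect attempts back off up to 30 seconds
        // and the connection is reported as failed after 20 attempts.
        String uri = "failover:(amqp://broker1.example.com:5672,amqp://broker2.example.com:5672)"
                + "?jms.clientID=example-client"
                + "&failover.maxReconnectAttempts=20"
                + "&failover.maxReconnectDelay=30000"
                + "&failover.nested.amqp.vhost=example-vhost";

        ConnectionFactory factory = new JmsConnectionFactory(uri);
        Connection connection = factory.createConnection();
        connection.start();

        // Create sessions, producers, and consumers as usual; the failover layer
        // reconnects to one of the listed peers if the current connection drops.
        connection.close();
    }
}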
Chapter 5. Working with nodes | Chapter 5. Working with nodes 5.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 5.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.20.0 node1.example.com Ready worker 7h v1.20.0 node2.example.com Ready worker 7h v1.20.0 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.20.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.20.0 node2.example.com Ready worker 7h v1.20.0 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.20.0+39c0afe 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.21.0-30.rhaos4.8.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.20.0+39c0afe 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.21.0-30.rhaos4.8.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.20.0+39c0afe 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.21.0-30.rhaos4.8.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.20.0 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Example output Name: node1.example.com 1 Roles: worker 2 Labels: beta.kubernetes.io/arch=amd64 3 beta.kubernetes.io/instance-type=m4.large beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-east-2 failure-domain.beta.kubernetes.io/zone=us-east-2a kubernetes.io/hostname=ip-10-0-140-16 node-role.kubernetes.io/worker= Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet 
has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.16.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.20.0 Kube-Proxy Version: v1.20.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (13 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring grafana-78765ddcc7-hnjmm 100m (6%) 200m (13%) 100Mi (1%) 200Mi (2%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. ... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure , and OutOfDisk status. These conditions are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 5.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. 5.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on one or more nodes: USD oc describe node <node1> <node2> For example: USD oc describe node ip-10-0-128-218.ec2.internal To list all or selected pods on selected nodes: USD oc describe node --selector=<node_selector> USD oc describe node --selector=kubernetes.io/os Or: USD oc describe node -l <pod_selector> USD oc describe node -l node-role.kubernetes.io/worker To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 5.1.3. Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. 
Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 5.2. Working with nodes As an administrator, you can perform a number of tasks to make your clusters more efficient. 5.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.24.0 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true Set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1> 5.2.2. Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted even if the node is backed up by a Machine. 
Note Any change to a MachineSet object is not applied to existing machines owned by the machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true The following command updates all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 5.2.3. Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, meaning that new pods are allowed for placement on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 5.2.4. Configuring control plane nodes as schedulable You can configure control plane nodes (also known as the master nodes) to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 policy: name: "" status: {} 1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 5.2.5. Deleting nodes 5.2.5.1. Deleting nodes from a cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. 
Procedure To delete a node from the OpenShift Container Platform cluster, edit the appropriate MachineSet object: Note If you are running cluster on bare metal, you cannot delete a node by editing MachineSet objects. Machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it. View the machine sets that are in the cluster: USD oc get machinesets -n openshift-machine-api The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>. Scale the machine set: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api For more information on scaling your cluster using a machine set, see Manually scaling a machine set . 5.2.5.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 5.2.6. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 2.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 5.2.7. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. 
This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . 
Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 5.2.8. Additional resources For more information on scaling your cluster using a MachineSet, see Manually scaling a MachineSet . 5.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 5.3.1. Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. 
Perform one of the following steps: Check current labels of the desired machine config pool. For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain RuntimeRequestTimeout StaticPodPath 5.4. Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit or both. If you use both options, the lower of the two limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization by OpenShift Container Platform. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the IP address pool. Resource overcommitting, leading to poor user application performance. Note A pod that is holding a single container actually uses two containers. The second container sets up networking prior to the actual container starting. As a result, a node running 10 pods actually has 20 containers running. The podsPerCore parameter limits the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node is 40. The maxPods parameter limits the number of pods the node can run to a fixed value, regardless of the properties of the node. 5.4.1. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1 1 If a label has been added it appears under labels . 
If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 1 Assign a name to CR. 2 Specify the label to apply the configuration change. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 5.5. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. The Node Tuning Operator helps you manage node-level tuning by orchestrating the Tuned daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized Tuned daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized Tuned daemon are rolled back on an event that triggers a profile change or when the containerized Tuned daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. 5.5.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run: USD oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. 
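For illustration only, a minimal custom Tuned CR might look like the following sketch. The CR name, profile name, sysctl value, and priority shown here are assumptions chosen for this example, not values taken from the default CR; the full syntax of the profile: and recommend: sections is described below.

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-custom-sysctl      # hypothetical CR name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: custom-sysctl                   # hypothetical profile name
    data: |
      [main]
      summary=Example profile that raises vm.max_map_count on worker nodes
      include=openshift-node
      [sysctl]
      vm.max_map_count=524288
  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
    priority: 20
    profile: custom-sysctl

Creating a CR such as this in the Operator's namespace is enough for the Operator to detect it, merge it with the default CR, and apply the profile to the nodes that match the recommend: section.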
Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator. 5.5.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of Tuned profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized Tuned daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists Tuned profiles and their names. profile: - name: tuned_profile_1 data: | # Tuned profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned # ... - name: tuned_profile_n data: | # Tuned profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. 
Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in Tuned operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized Tuned daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized Tuned daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized Tuned pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. 
Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. 5.5.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - name: "openshift" data: | [main] summary=Optimize systems running OpenShift (parent profile) include=USD{f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=131072 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 - name: "openshift-control-plane" data: | [main] summary=Optimize systems running OpenShift control plane include=openshift [sysctl] # ktune sysctl settings, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # The total time the scheduler will consider a migrated process # "cache hot" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns=5000000 # SCHED_OTHER wake-up granularity. # # Preemption granularity when tasks wake up. Lower the value to # improve wake-up latency and throughput for latency critical tasks. kernel.sched_wakeup_granularity_ns=4000000 - name: "openshift-node" data: | [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 recommend: - profile: "openshift-control-plane" priority: 30 match: - label: "node-role.kubernetes.io/master" - label: "node-role.kubernetes.io/infra" - profile: "openshift-node" priority: 40 5.5.4. Supported Tuned daemon plug-ins Excluding the [main] section, the following Tuned plug-ins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm There is some dynamic tuning functionality provided by some of these plug-ins that is not supported. The following Tuned plug-ins are currently not supported: bootloader script systemd See Available Tuned Plug-ins and Getting Started with Tuned for more information. 5.6. 
Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 5.6.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 5.6.2. Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 
5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 5.6.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 5.6.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to the host: USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 5.7. Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 5.7.1. Understanding how terminated containers are removed through garbage collection Container garbage collection can be performed using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long the pod is not deleted and the eviction threshold is not reached. 
If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 5.2. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. 5.7.2. Understanding how images are removed through garbage collection Image garbage collection relies on disk usage as reported by cAdvisor on the node to decide which images to remove from the node. The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 5.3. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the spins. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 5.7.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. 
Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: custom-kubelet=small-pods 1 1 If a label has been added it appears under Labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 1 Name for the object. 2 Selector label. 3 Type of eviction: evictionSoft or evictionHard . 4 Eviction thresholds based on a specific eviction trigger signal. 5 Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 The duration to wait before transitioning out of an eviction pressure condition. 8 The minimum age for an unused image before the image is removed by garbage collection. 9 The percent of disk usage (expressed as an integer) that triggers image garbage collection. 10 The percent of disk usage (expressed as an integer) that image garbage collection attempts to free. Create the object: USD oc create -f <file-name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verify that garbage collection is active. The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 5.8. 
Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. 5.8.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 5.8.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula: Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 5.8.1.2. How nodes enforce resource constraints The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent. To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons. Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. 
The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer. 5.8.1.3. Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 5.8.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 5.8.2. Configuring allocated resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. The ephemeral-resource resource type is supported as well. For the cpu type, the resource quantity is specified in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , it is specified in units of bytes, such as 200Ki , 50Mi , or 5Gi . As an administrator, you can set these using a custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (e.g., cpu=200m,memory=512Mi ). For details on the recommended system-reserved values, refer to the recommended system-reserved values . Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. 
Perform one of the following steps: View the Machine Config Pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: systemReserved: cpu: 1000m memory: 1Gi 1 Assign a name to CR. 2 Specify the label from the Machine Config Pool. 5.9. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane allowing the compute nodes to use CPUs 4 - 23. 5.9.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved and kubeReserved parameters. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP. Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved and kubeReserved parameters, see Allocating resources for nodes in an OpenShift Container Platform cluster . 5.10. Machine Config Daemon metrics The Machine Config Daemon is a part of the Machine Config Operator. It runs on every node in the cluster. The Machine Config Daemon manages configuration changes and updates on each of the nodes. 5.10.1. Machine Config Daemon metrics Beginning with OpenShift Container Platform 4.3, the Machine Config Daemon provides a set of metrics. These metrics can be accessed using the Prometheus Cluster Monitoring stack. The following table describes this set of metrics. Note Metrics marked with * in the *Name* and Description columns represent serious errors that might cause performance problems. Such problems might prevent updates and upgrades from proceeding. 
Note While some entries contain commands for getting specific logs, the most comprehensive set of logs is available using the oc adm must-gather command. Table 5.4. MCO metrics Name Format Description Notes mcd_host_os_and_version []string{"os", "version"} Shows the OS that MCD is running on, such as RHCOS or RHEL. In case of RHCOS, the version is provided. ssh_accessed counter Shows the number of successful SSH authentications into the node. The non-zero value shows that someone might have made manual changes to the node. Such changes might cause irreconcilable errors due to the differences between the state on the disk and the state defined in the machine configuration. mcd_drain* {"drain_time", "err"} Logs errors received during failed drain. * While drains might need multiple tries to succeed, terminal failed drains prevent updates from proceeding. The drain_time metric, which shows how much time the drain took, might help with troubleshooting. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_pivot_err* []string{"pivot_target", "err"} Logs errors encountered during pivot. * Pivot errors might prevent OS upgrades from proceeding. For further investigation, run this command to access the node and see all its logs: USD oc debug node/<node> - chroot /host journalctl -u pivot.service Alternatively, you can run this command to only see the logs from the machine-config-daemon container: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_state []string{"state", "reason"} State of Machine Config Daemon for the indicated node. Possible states are "Done", "Working", and "Degraded". In case of "Degraded", the reason is included. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_kubelet_state* []string{"err"} Logs kubelet health failures. * This is expected to be empty, with failure count of 0. If failure count exceeds 2, the error indicating threshold is exceeded. This indicates a possible issue with the health of the kubelet. For further investigation, run this command to access the node and see all its logs: USD oc debug node/<node> - chroot /host journalctl -u kubelet mcd_reboot_err* []string{"message", "err"} Logs the failed reboots and the corresponding errors. * This is expected to be empty, which indicates a successful reboot. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_update_state []string{"config", "err"} Logs success or failure of configuration updates and the corresponding errors. The expected value is rendered-master/rendered-worker-XXXX . If the update fails, an error is present. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon Additional resources See Monitoring overview . See the documentation on gathering data about your cluster . | [
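As a short, hedged illustration of how the metrics and log references above might be followed up in practice, the commands below assume cluster-admin access; the node name and pod hash are placeholders.

# Collect the most comprehensive set of logs referenced above
$ oc adm must-gather

# Locate the machine-config-daemon pod running on a particular node
$ oc get pods -n openshift-machine-config-operator -o wide --field-selector spec.nodeName=<node_name>

# Follow the daemon container logs for that pod
$ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon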
"oc get nodes",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.20.0 node1.example.com Ready worker 7h v1.20.0 node2.example.com Ready worker 7h v1.20.0",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.20.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.20.0 node2.example.com Ready worker 7h v1.20.0",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.20.0+39c0afe 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.21.0-30.rhaos4.8.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.20.0+39c0afe 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.21.0-30.rhaos4.8.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.20.0+39c0afe 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.21.0-30.rhaos4.8.gitf2f339d.el8-dev",
"oc get node <node>",
"oc get node node1.example.com",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.20.0",
"oc describe node <node>",
"oc describe node node1.example.com",
"Name: node1.example.com 1 Roles: worker 2 Labels: beta.kubernetes.io/arch=amd64 3 beta.kubernetes.io/instance-type=m4.large beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-east-2 failure-domain.beta.kubernetes.io/zone=us-east-2a kubernetes.io/hostname=ip-10-0-140-16 node-role.kubernetes.io/worker= Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.16.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.20.0 Kube-Proxy Version: v1.20.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (13 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring grafana-78765ddcc7-hnjmm 100m (6%) 200m (13%) 100Mi (1%) 200Mi (2%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn 
ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet.",
"oc describe node <node1> <node2>",
"oc describe node ip-10-0-128-218.ec2.internal",
"oc describe --selector=<node_selector>",
"oc describe node --selector=kubernetes.io/os",
"oc describe -l=<pod_selector>",
"oc describe node -l node-role.kubernetes.io/worker",
"oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%",
"oc adm top node --selector=''",
"oc adm cordon <node1>",
"node/<node1> cordoned",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.24.0",
"oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]",
"oc adm drain <node1> <node2> --force=true",
"oc adm drain <node1> <node2> --grace-period=-1",
"oc adm drain <node1> <node2> --ignore-daemonsets=true",
"oc adm drain <node1> <node2> --timeout=5s",
"oc adm drain <node1> <node2> --delete-emptydir-data=true",
"oc adm drain <node1> <node2> --dry-run=true",
"oc adm uncordon <node1>",
"oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>",
"oc label nodes webconsole-7f7f6 unhealthy=true",
"oc label pods --all <key_1>=<value_1>",
"oc label pods --all status=unhealthy",
"oc adm cordon <node>",
"oc adm cordon node1.example.com",
"node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled",
"oc adm uncordon <node1>",
"oc edit schedulers.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 policy: name: \"\" status: {}",
"oc get machinesets -n openshift-machine-api",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 2.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service",
"oc create -f 99-worker-setsebool.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"oc get machineconfigpool --show-labels",
"NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False",
"oc label machineconfigpool worker custom-kubelet=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi",
"oc create -f <file-name>",
"oc create -f master-kube-config.yaml",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # Tuned profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned - name: tuned_profile_n data: | # Tuned profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - name: \"openshift\" data: | [main] summary=Optimize systems running OpenShift (parent profile) include=USD{f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=131072 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 - name: \"openshift-control-plane\" data: | [main] summary=Optimize systems running OpenShift control plane include=openshift [sysctl] # ktune sysctl settings, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # The total time the scheduler will consider a migrated process # \"cache hot\" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns=5000000 # SCHED_OTHER wake-up granularity. # # Preemption granularity when tasks wake up. Lower the value to # improve wake-up latency and throughput for latency critical tasks. kernel.sched_wakeup_granularity_ns=4000000 - name: \"openshift-node\" data: | [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"oc debug node/<node1>",
"chroot /host",
"systemctl reboot",
"oc adm uncordon <node1>",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: custom-kubelet=small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10",
"oc create -f <file-name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: systemReserved: cpu: 1000m memory: 1Gi",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3",
"oc create -f <file_name>.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/nodes/working-with-nodes |
Chapter 6. Documentation Tools | Chapter 6. Documentation Tools Red Hat Enterprise Linux 6 offers the Doxygen tool for generating documentation from source code and for writing standalone documentation. 6.1. Doxygen Doxygen is a documentation tool that creates reference material both online in HTML and offline in LaTeX. It does this from a set of documented source files, which makes it easy to keep the documentation consistent with and correct for the source code. 6.1.1. Doxygen Supported Output and Languages Doxygen has support for output in: RTF (MS Word) PostScript Hyperlinked PDF Compressed HTML Unix man pages Doxygen supports the following programming languages: C C++ C# Objective-C IDL Java VHDL PHP Python Fortran D 6.1.2. Getting Started Doxygen uses a configuration file to determine its settings, so it is paramount that this file be created correctly. Each project requires its own configuration file. The most painless way to create the configuration file is with the command doxygen -g config-file . This creates a template configuration file that can be easily edited. The variable config-file is the name of the configuration file. If it is omitted from the command, the file is called Doxyfile by default. Another useful option while creating the configuration file is the use of a minus sign ( - ) as the file name. This is useful for scripting, as it causes Doxygen to attempt to read the configuration file from standard input ( stdin ). The configuration file consists of a number of variables and tags, similar to a simple Makefile. For example: TAGNAME = VALUE1 VALUE2... For the most part these can be left alone, but should it be required to edit them, see the configuration page of the Doxygen documentation website for an extensive explanation of all the tags available. There is also a GUI interface called doxywizard . If this is the preferred method of editing, then documentation for this function can be found on the Doxywizard usage page of the Doxygen documentation website. There are eight tags that are useful to become familiar with. INPUT For small projects consisting mainly of C or C++ source and header files it is not required to change anything. However, if the project is large and consists of a source directory or tree, then assign the root directory or directories to the INPUT tag. FILE_PATTERNS File patterns (for example, *.cpp or *.h ) can be added to this tag allowing only files that match one of the patterns to be parsed. RECURSIVE Setting this to yes will allow recursive parsing of a source tree. EXCLUDE and EXCLUDE_PATTERNS These are used to further fine-tune the files that are parsed by adding file patterns to avoid. For example, to omit all test directories from a source tree, use EXCLUDE_PATTERNS = */test/* . EXTRACT_ALL When this is set to yes , doxygen will pretend that everything in the source files is documented to give an idea of how a fully documented project would look. However, warnings regarding undocumented members will not be generated in this mode; set it back to no when finished to correct this. SOURCE_BROWSER and INLINE_SOURCES By setting the SOURCE_BROWSER tag to yes , doxygen will generate a cross-reference to analyze a piece of software's definition in its source files with the documentation existing about it. These sources can also be included in the documentation by setting INLINE_SOURCES to yes .
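For illustration, a minimal Doxyfile sketch that combines the tags above; the input path and patterns are placeholders to be adjusted per project:

INPUT            = src
FILE_PATTERNS    = *.cpp *.h
RECURSIVE        = YES
EXCLUDE_PATTERNS = */test/*
SOURCE_BROWSER   = YES
INLINE_SOURCES   = YES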
6.1.3. Running Doxygen Running doxygen config-file creates html , rtf , latex , xml , and/or man directories in whichever directory doxygen is started in, containing the documentation for the corresponding filetype. HTML OUTPUT This documentation can be viewed with an HTML browser that supports cascading style sheets (CSS), as well as DHTML and JavaScript for some sections. Point the browser (for example, Mozilla, Safari, Konqueror, or Internet Explorer 6) to the index.html in the html directory. LaTeX OUTPUT Doxygen writes a Makefile into the latex directory to make it easy to compile the LaTeX documentation. To do this, use a recent teTeX distribution. What is contained in this directory depends on whether USE_PDFLATEX is set to no . Where this is true, typing make while in the latex directory generates refman.dvi . This can then be viewed with xdvi or converted to refman.ps by typing make ps . Note that this requires dvips . There are a number of commands that may be useful. The command make ps_2on1 prints two pages on one physical page. It is also possible to convert to a PDF if a ghostscript interpreter is installed by using the command make pdf . Another valid command is make pdf_2on1 . When doing this, set the PDF_HYPERLINKS and USE_PDFLATEX tags to yes ; the generated Makefile will then only contain a target to build refman.pdf directly. RTF OUTPUT Doxygen combines the RTF output into a single file, refman.rtf , which is designed to be imported into Microsoft Word. Some information is encoded using fields, but this can be shown by selecting all ( CTRL+A or Edit -> select all) and then right-clicking and selecting the toggle fields option from the drop down menu. XML OUTPUT The output into the xml directory consists of a number of files, each compound gathered by doxygen, as well as an index.xml . An XSLT script, combine.xslt , is also created that is used to combine all the XML files into a single file. Along with this, two XML schema files are created, index.xsd for the index file, and compound.xsd for the compound files, which describe the possible elements, their attributes, and how they are structured. MAN PAGE OUTPUT The documentation from the man directory can be viewed with the man program after ensuring that the correct man directory is in the man path. Be aware that due to limitations with the man page format, information such as diagrams, cross-references and formulas will be lost.
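A typical end-to-end run might look as follows (a sketch; it assumes the configuration file is named Doxyfile, USE_PDFLATEX is set to no, and dvips and a ghostscript interpreter are installed):

doxygen Doxyfile   # generates html/, latex/, man/, and so on
cd latex
make               # builds refman.dvi
make ps            # converts to refman.ps (requires dvips)
make pdf           # converts to refman.pdf (requires ghostscript)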
6.1.4. Documenting the Sources There are three main steps to document the sources. First, ensure that EXTRACT_ALL is set to no so warnings are correctly generated and documentation is built properly. This allows doxygen to create documentation for documented members, files, classes and namespaces. There are two ways this documentation can be created: A special documentation block This comment block, containing additional marking so Doxygen knows it is part of the documentation, is in either C or C++. It consists of a brief description, or a detailed description. Both of these are optional. What is not optional, however, is the in-body description. This then links together all the comment blocks found in the body of the method or function. Note While more than one brief or detailed description is allowed, this is not recommended as the order is not specified. The following will detail the ways in which a comment block can be marked as a detailed description: C-style comment block, starting with two asterisks (*) in the JavaDoc style. C-style comment block using the Qt style, consisting of an exclamation mark (!) instead of an extra asterisk. The beginning asterisks on the documentation lines can be left out in both cases if that is preferred. A blank beginning and end line in C++ is also acceptable, with either three forward slashes or two forward slashes and an exclamation mark. or Alternatively, in order to make the comment blocks more visible, a line of asterisks or forward slashes can be used. or Note that the two forward slashes at the end of the normal comment block start a special comment block. There are three ways to add a brief description to documentation. To add a brief description use \brief above one of the comment blocks. This brief section ends at the end of the paragraph and any further paragraphs are the detailed descriptions. By setting JAVADOC_AUTOBRIEF to yes , the brief description will only last until the first dot followed by a space or new line, consequently limiting the brief description to a single sentence. This can also be used with the above-mentioned three-slash comment blocks (///). The third option is to use a special C++ style comment, ensuring this does not span more than one line. or The blank line in the above example is required to separate the brief description and the detailed description, and JAVADOC_AUTOBRIEF must be set to no . Examples of a documented piece of C++ code using the Qt style can be found on the Doxygen documentation website. It is also possible to have the documentation after members of a file, struct, union, class, or enum. To do this, add a < marker in the comment block. Or in a Qt style as: or or For brief descriptions after a member use: or Examples of these and how the HTML is produced can be viewed on the Doxygen documentation website. Documentation at other places While it is preferable to place documentation in front of the code it is documenting, at times it is only possible to put it in a different location, especially if a file is to be documented; after all it is impossible to place the documentation in front of a file. This is best avoided unless it is absolutely necessary as it can lead to some duplication of information. To do this, it is important to have a structural command inside the documentation block. Structural commands start with a backslash (\) or an at-sign (@) for JavaDoc and are followed by one or more parameters. In the above example the command \class is used. This indicates that the comment block contains documentation for the class 'Test'. Others are: \struct : document a C-struct \union : document a union \enum : document an enumeration type \fn : document a function \var : document a variable, typedef, or enum value \def : document a #define \typedef : document a type definition \file : document a file \namespace : document a namespace \package : document a Java package \interface : document an IDL interface Finally, the contents of a special documentation block are parsed before being written to the HTML and/or LaTeX output directories. This includes: Special commands are executed. Any white space and asterisks (*) are removed. Blank lines are taken as new paragraphs. Words are linked to their corresponding documentation. Where the word is preceded by a percent sign (%) the percent sign is removed and the word remains. Where certain patterns are found in the text, links to members are created. Examples of this can be found on the automatic link generation page on the Doxygen documentation website.
When the documentation is for LaTeX, HTML tags are interpreted and converted to LaTeX equivalents. A list of supported HTML tags can be found on the HTML commands page on the Doxygen documentation website. 6.1.5. Resources More information can be found on the Doxygen website. Doxygen homepage Doxygen introduction Doxygen documentation Output formats | [
"/** * ... documentation */",
"/*! * ... documentation */",
"/// /// ... documentation ///",
"//! //! ... documentation //!",
"///////////////////////////////////////////////// /// ... documentation /////////////////////////////////////////////////",
"/********************************************//** * ... documentation ***********************************************/",
"/*! \\brief brief documentation . * brief documentation . * * detailed documentation . */",
"/** Brief documentation . Detailed documentation continues * from here . */",
"/// Brief documentation . /** Detailed documentation . */",
"//! Brief documentation. //! Detailed documentation //! starts here",
"int var; /*!< detailed description after the member */",
"int var; /**< detailed description after the member */",
"int var; //!< detailed description after the member //!<",
"int var; ///< detailed description after the member ///<",
"int var; //!< brief description after the member",
"int var; ///< brief description after the member",
"/*! \\class Test \\brief A test class . A more detailed description of class . */"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/chap-documentation_tools |
5.219. openldap | 5.219. openldap 5.219.1. RHSA-2012:1151 - Low: openldap security and bug fix update Updated openldap packages that fix one security issue and one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. OpenLDAP is an open source suite of LDAP (Lightweight Directory Access Protocol) applications and development tools. Security Fix CVE-2012-2668 It was found that the OpenLDAP server daemon ignored olcTLSCipherSuite settings. This resulted in the default cipher suite always being used, which could lead to weaker than expected ciphers being accepted during Transport Layer Security (TLS) negotiation with OpenLDAP clients. Bug Fix BZ# 844428 When the smbk5pwd overlay was enabled in an OpenLDAP server, and a user changed their password, the Microsoft NT LAN Manager (NTLM) and Microsoft LAN Manager (LM) hashes were not computed correctly. This led to the sambaLMPassword and sambaNTPassword attributes being updated with incorrect values, preventing the user logging in using a Windows-based client or a Samba client. With this update, the smbk5pwd overlay is linked against OpenSSL. As such, the NTLM and LM hashes are computed correctly, and password changes work as expected when using smbk5pwd. Users of OpenLDAP are advised to upgrade to these updated packages, which contain backported patches to correct these issues. After installing this update, the OpenLDAP daemons will be restarted automatically. 5.219.2. RHSA-2012:0899 - Low: openldap bug fix update Updated openldap packages that fix a security issue and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. OpenLDAP is an open-source suite of LDAP (Lightweight Directory Access Protocol) applications and development tools. LDAP is a set of protocols for accessing directory services (usually phone-book style information, but other information is possible) over the Internet, similar to the way DNS (Domain Name System) information is propagated over the Internet. The openldap package contains configuration files, libraries, and documentation for OpenLDAP. Security Fix CVE-2012-1164 A denial of service flaw was found in the way the OpenLDAP server daemon (slapd) processed certain search queries requesting only attributes and no values. In certain configurations, a remote attacker could issue a specially-crafted LDAP search query that, when processed by slapd, would cause slapd to crash due to an assertion failure. Bug Fixes BZ# 784211 When OpenLDAP was set with master-master replication and with the "unique" overlay configured on the back-end database, a server failed to synchronize after getting online. An upstream patch has been applied and the overlay no longer causes breaches in synchronization. BZ# 790687 When the OpenLDAP server was enabled on the ldaps port (636), this port could already be taken by another process using the bindresvport() call. Consequently, the slapd daemon could not bind to the ldaps port. 
This update adds a configuration file for the portreserve service to reserve the ldaps port and this port is now always available for slapd. BZ# 742163 When the OpenLDAP server was running with the "constraint" overlay enabled and the "count" restrictions configured, specific modify operations could cause a "count" restriction violation without the overlay detecting it. Now, the count overlay has been fixed to detect such situations and the server returns the "constraint violation" error as expected. BZ# 783445 If the slapd daemon was set up with master-master replication over TLS, when started, it terminated unexpectedly with a segmentation fault due to accessing unallocated memory. This update applies a patch that copies and stores the TLS initialization parameters until the deferred TLS initialization takes place, and the crashes no longer occur in the described scenario. BZ# 796808 When an OpenLDAP server used TLS and a problem with loading the server key occurred, the server terminated unexpectedly with a segmentation fault due to accessing uninitialized memory. With this update, variables holding TLS certificates and keys are properly initialized, the server no longer crashes in the described scenario, and information about the failure is logged instead. BZ# 807363 Due to a bug in the libldap library, when a remote LDAP server responded with a referral to a client query and referral chasing was enabled in the library on the client, a memory leak occurred in libldap. An upstream patch has been provided to address this issue and memory leaks no longer occur in the described scenario. BZ# 742023 If a client established a TLS connection to a remote server, which had a certificate issued by a commonly trusted certificate authority (CA), the server certificate was rejected because the CA certificate could not be found. Now, during the package installation, a certificate database is created and a module with a trusted root CA is loaded. Trusted CAs shipped with the Mozilla NSS package are used and TLS connections to a remote server now work as expected. BZ# 784203 Under certain conditions, when the unbind operation was called and the ldap handle was destroyed, the library attempted to close the connection socket, which was already closed. Consequently, warning messages from the valgrind utility were returned. An upstream patch has been applied, additional checks before closing a connection socket have been added, and the socket in the described scenario is now closed only once with no warnings returned. BZ# 732916 Previously, a description of the SASL_NOCANON option was missing under the "SASL OPTIONS" section in the ldap.conf man page. This update amends the man page. BZ# 743781 When the mutually exclusive options "-w" and "-W" were passed to any OpenLDAP client tool, the tool terminated with an assertion error. An upstream patch has been applied and the client tools now do not start if these options are passed on the command line together, thus preventing this bug. BZ# 745470 Previously, a description of the "-o" and "-N" options was missing in the man pages for OpenLDAP client tools. This update amends the man pages. BZ# 730745 When the "memberof" overlay was set up on top of the front end database, the server terminated unexpectedly with a segmentation fault if an entry was modified or deleted. With this update, the "memberof" overlay can no longer be set up on top of the front end database. Instead, it is required to be set up on top of the back end database or databases.
Now, the crash no longer occurs in the described scenario. BZ# 816168 When a utility from the openldap-clients package was called without a specified URL, a memory leak occurred. An upstream patch has been applied to address this issue and the bug no longer occurs in the described scenario. BZ# 818844 When connecting to a remote LDAP server with TLS enabled, while the TLS_CACERTDIR parameter was set to a Mozilla NSS certificate database and the TLS_CACERT parameter was set to a PEM bundle with CA certificates, certificates from the PEM bundle were not loaded. If the signing CA certificate was present only in the PEM CA bundle specified by TLS_CACERT, validation of the remote certificate failed. This update allows loading of CA certificates from the PEM bundle file if the Mozilla NSS certificate database is set up as well. As a result, the validation succeeds in the described scenario. Users of openldap are advised to upgrade to these updated packages, which fix this issue and these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/openldap |
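As an illustration of the client-side options mentioned in these errata (SASL_NOCANON, TLS_CACERT, and TLS_CACERTDIR), a hedged /etc/openldap/ldap.conf sketch follows; the URI and file paths are placeholders:

URI            ldaps://ldap.example.com
TLS_CACERTDIR  /etc/openldap/certs          # Mozilla NSS certificate database
TLS_CACERT     /etc/openldap/ca-bundle.pem  # PEM CA bundle, loadable alongside the NSS database after BZ# 818844
SASL_NOCANON   on                           # documented in the amended ldap.conf man page (BZ# 732916)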
Chapter 60. Securing passwords with a keystore | Chapter 60. Securing passwords with a keystore You can use a keystore to encrypt passwords that are used for communication between Business Central and KIE Server. You should encrypt both controller and KIE Server passwords. If Business Central and KIE Server are deployed to different application servers, then both application servers should use the keystore. Use Java Cryptography Extension KeyStore (JCEKS) for your keystore because it supports symmetric keys. Use KeyTool, which is part of the JDK installation, to create a new JCEKS. Note If KIE Server is not configured with JCEKS, KIE Server passwords are stored in system properties in plain text form. Prerequisites KIE Server is installed in Oracle WebLogic Server. A KIE Server user with the kie-server role has been created, as described in Section 56.1, "Configuring the KIE Server group and users" . Java 8 or higher is installed. Procedure To use KeyTool to create a JCEKS, enter the following command in the Java 8 home directory: USD<JAVA_HOME>/bin/keytool -importpassword -keystore <KEYSTORE_PATH> -keypass <ALIAS_KEY_PASSWORD> -alias <PASSWORD_ALIAS> -storepass <KEYSTORE_PASSWORD> -storetype JCEKS In this example, replace the following variables: <KEYSTORE_PATH> : The path where the keystore will be stored <KEYSTORE_PASSWORD> : The keystore password <ALIAS_KEY_PASSWORD> : The password used to access values stored with the alias <PASSWORD_ALIAS> : The alias of the entry to the process When prompted, enter the password for the KIE Server user that you created. Set the system properties listed in the following table: Table 60.1. System properties used to load a KIE Server JCEKS System property Placeholder Description kie.keystore.keyStoreURL <KEYSTORE_URL> URL for the JCEKS that you want to use, for example file:///home/kie/keystores/keystore.jceks kie.keystore.keyStorePwd <KEYSTORE_PWD> Password for the JCEKS kie.keystore.key.server.alias <KEY_SERVER_ALIAS> Alias of the key for REST services where the password is stored kie.keystore.key.server.pwd <KEY_SERVER_PWD> Password of the alias for REST services with the stored password kie.keystore.key.ctrl.alias <KEY_CONTROL_ALIAS> Alias of the key for default REST Process Automation Controller where the password is stored kie.keystore.key.ctrl.pwd <KEY_CONTROL_PWD> Password of the alias for default REST Process Automation Controller with the stored password Start KIE Server to verify the configuration. | [
"USD<JAVA_HOME>/bin/keytool -importpassword -keystore <KEYSTORE_PATH> -keypass <ALIAS_KEY_PASSWORD> -alias <PASSWORD_ALIAS> -storepass <KEYSTORE_PASSWORD> -storetype JCEKS"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/securing-passwords-wls-proc_kie-server-on-wls |
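Putting the keystore procedure together, a hedged end-to-end example; the path, alias, and passwords below are illustrative placeholders only:

USD<JAVA_HOME>/bin/keytool -importpassword -keystore /home/kie/keystores/droolsServer.jceks -keypass keypwd -alias kieserver -storepass jceksstorepwd -storetype JCEKS

The matching system properties from the table would then be passed to the application server, for example:

-Dkie.keystore.keyStoreURL=file:///home/kie/keystores/droolsServer.jceks
-Dkie.keystore.keyStorePwd=jceksstorepwd
-Dkie.keystore.key.server.alias=kieserver
-Dkie.keystore.key.server.pwd=keypwd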
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/getting_started_with_eclipse_temurin/proc-providing-feedback-on-redhat-documentation |
Chapter 4. View OpenShift Data Foundation Topology | Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements come together to compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or indications of alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_amazon_web_services/viewing-odf-topology_mcg-verify |
2.3. Pass-Through Authentication | 2.3. Pass-Through Authentication If your client application (web application or web service) is deployed on the same application server instance as JBoss Data Virtualization and your client application uses a security domain to handle authentication, you can configure JBoss Data Virtualization to use that same security domain. This way, the user will not have to re-authenticate in order to use JBoss Data Virtualization. In pass-through mode, Red Hat JBoss Data Virtualization looks for an authenticated subject in the calling thread context and uses it for sessioning and authorization. Procedure 2.1. Configure Pass-Through Authentication Change the Teiid security-domain name in the embedded "transport" section to the same name as your application's security domain name. (You can make this change via the CLI or by editing the standalone.xml file if you are running the product in standalone mode.). Important The security domain must be a JAAS-based LoginModule and your client application must obtain its Teiid connection using a Local Connection with the PassthroughAuthentication connection flag set to true. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/pass__through_authntication |
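For example, a client application could request a pass-through local connection with a JDBC URL along these lines (a sketch; the VDB name MyVDB is hypothetical, and the PassthroughAuthentication flag is the one named in the procedure above):

jdbc:teiid:MyVDB;PassthroughAuthentication=true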
Chapter 7. Backing Up and Restoring Data Grid Clusters | Chapter 7. Backing Up and Restoring Data Grid Clusters Create archives of Data Grid resources that include cached entries, cache configurations, Protobuf schemas, and server scripts. You can then use the backup archives to restore Data Grid Server clusters after a restart or migration. Prerequisites Start the Data Grid CLI. Connect to a running Data Grid cluster. 7.1. Backing Up Data Grid Clusters Create backup archives in .zip format that you can download or store on Data Grid Server. Prerequisites Backup archives should reflect the most recent cluster state. For this reason you should ensure the cluster is no longer accepting write requests before you create backup archives. Procedure Create a CLI connection to Data Grid. Run the backup create command with the appropriate options, for example: Back up all resources with an automatically generated name. Back up all resources in a backup archive named example-backup . Back up all resources to the /some/server/dir path on the server. Back up only caches and cache templates. Back up named Protobuf schemas only. List available backup archives on the server. Download the example-backup archive from the server. If the backup operation is still in progress, the command waits for it to complete. Optionally delete the example-backup archive from the server. 7.2. Restoring Data Grid Clusters from Backup Archives Apply the content of backup archives to Data Grid clusters to restore them to the backed up state. Prerequisites Create a backup archive that is either local to the Data Grid CLI or stored on Data Grid Server. Ensure that the target container matches the container name in the backup archive. You cannot restore backups if the container names do not match. Procedure Create a CLI connection to Data Grid. Run the backup restore command with the appropriate options. Restore all content from a backup archive accessible on the server. Restore all content from a local backup archive. Restore only cache content from a backup archive on the server. | [
"backup create",
"backup create -n example-backup",
"backup create -d /some/server/dir",
"backup create --caches=* --templates=*",
"backup create --proto-schemas=schema1,schema2",
"backup ls",
"backup get example-backup",
"backup delete example-backup",
"backup restore /some/path/on/the/server",
"backup restore -u /some/local/path",
"backup restore /some/path/on/the/server --caches=*"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_the_data_grid_command_line_interface/backup |
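Taken together, a minimal backup session might look like this (a sketch; the server address is a placeholder):

bin/cli.sh                       # start the Data Grid CLI
connect 127.0.0.1:11222          # connect to a running cluster
backup create -n example-backup  # create the archive on the server
backup get example-backup        # download it once complete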
Chapter 8. Troubleshooting disaster recovery | Chapter 8. Troubleshooting disaster recovery 8.1. Troubleshooting Metro-DR 8.1.1. A statefulset application stuck after failover Problem While relocating to a preferred cluster, DRPlacementControl is stuck reporting PROGRESSION as "MovingToSecondary". Previously, before Kubernetes v1.23, the Kubernetes control plane never cleaned up the PVCs created for StatefulSets. This activity was left to the cluster administrator or a software operator managing the StatefulSets. Due to this, the PVCs of the StatefulSets were left untouched when their Pods were deleted. This prevents Ramen from relocating an application to its preferred cluster. Resolution If the workload uses StatefulSets, and relocation is stuck with PROGRESSION as "MovingToSecondary", then run: For each bound PVC for that namespace that belongs to the StatefulSet, run Once all PVCs are deleted, Volume Replication Group (VRG) transitions to secondary, and then gets deleted. Run the following command. After a few seconds to a few minutes, the PROGRESSION reports "Completed" and relocation is complete. Result The workload is relocated to the preferred cluster. BZ reference: [ 2118270 ] 8.1.2. DR policies protect all applications in the same namespace Problem While only a single application is selected to be used by a DR policy, all applications in the same namespace will be protected. As a result, PVCs that match the DRPlacementControl spec.pvcSelector across multiple workloads, or across all workloads if the selector is missing, can be managed by replication management multiple times, which can cause data corruption or invalid operations based on individual DRPlacementControl actions. Resolution Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace (a labeled-PVC sketch appears at the end of this section). It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. BZ reference: [ 2128860 ] 8.1.3. During failback of an application stuck in Relocating state Problem This issue might occur after performing failover and failback of an application (all nodes or clusters are up). When performing failback, the application is stuck in the Relocating state with a message of Waiting for PV restore to complete. Resolution Use an S3 client or equivalent to clean up the duplicate PV objects from the s3 store. Keep only the one that has a timestamp closer to the failover or relocate time. BZ reference: [ 2120201 ] 8.1.4. Relocate or failback might be stuck in Initiating state Problem When a primary cluster is down and comes back online while the secondary goes down, relocate or failback might be stuck in the Initiating state. Resolution To avoid this situation, cut off all access from the old active hub to the managed clusters. Alternatively, you can scale down the ApplicationSet controller on the old active hub cluster either before moving workloads or when they are in the clean-up phase. On the old active hub, scale down the two deployments using the following commands: BZ reference: [ 2243804 ]
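For the pvcSelector resolution in 8.1.2, a hedged sketch; the label, names, and namespace are placeholders, and the DRPlacementControl is abbreviated to the relevant field:

oc label pvc <pvc_name> -n <application_namespace> appname=busybox

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: <application_name>-drpc
  namespace: <application_namespace>
spec:
  pvcSelector:
    matchLabels:
      appname: busybox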
8.2. Troubleshooting Regional-DR 8.2.1. rbd-mirror daemon health is in warning state Problem There appear to be numerous cases where WARNING gets reported when the mirror service status call ::get_mirror_service_status queries the Ceph monitor for the rbd-mirror service status. Following a network disconnection, rbd-mirror daemon health is in the warning state while the connectivity between both the managed clusters is fine. Resolution Run the following command in the toolbox and look for leader:false If you see the following in the output: leader: false It indicates that there is a daemon startup issue and the most likely root cause could be due to problems reliably connecting to the secondary cluster. Workaround: Move the rbd-mirror pod to a different node by simply deleting the pod and verifying that it has been rescheduled on another node. leader: true or no output Contact Red Hat Support . BZ reference: [ 2118627 ] 8.2.2. volsync-rsync-src pod is in error state as it is unable to resolve the destination hostname Problem The VolSync source pod is unable to resolve the hostname of the VolSync destination pod. The log of the VolSync Pod consistently shows an error message over an extended period of time similar to the following log snippet. Example output Resolution Restart submariner-lighthouse-agent on both nodes. 8.2.3. Cleanup and data sync for ApplicationSet workloads remain stuck after older primary managed cluster is recovered post failover Problem ApplicationSet-based workload deployments to managed clusters are not garbage collected in cases when the hub cluster fails. It is recovered to a standby hub cluster, while the workload has been failed over to a surviving managed cluster. The cluster that the workload was failed over from rejoins the new recovered standby hub. ApplicationSets that are DR protected, with a regional DRPolicy, hence start firing the VolumeSynchronizationDelay alert. Further, such DR-protected workloads cannot be failed over to the peer cluster or relocated to the peer cluster as data is out of sync between the two clusters. Resolution The workaround requires that the openshift-gitops operator own the workload resources that are orphaned on the managed cluster that rejoined the hub after a failover of the workload was performed from the new recovered hub. To achieve this, the following steps can be taken: Determine the Placement that is in use by the ArgoCD ApplicationSet resource on the hub cluster in the openshift-gitops namespace. Inspect the placement label value for the ApplicationSet in this field: spec.generators.clusterDecisionResource.labelSelector.matchLabels This would be the name of the Placement resource <placement-name> Ensure that there exists a PlacementDecision for the Placement referenced by the ApplicationSet. This results in a single PlacementDecision that places the workload in the currently desired failover cluster. Create a new PlacementDecision for the ApplicationSet pointing to the cluster where it should be cleaned up. For example: Update the newly created PlacementDecision with a status subresource. Watch and ensure that the Application resource for the ApplicationSet has been placed on the desired cluster. In the output, check if the SYNC STATUS shows as Synced and the HEALTH STATUS shows as Healthy . Delete the PlacementDecision that was created in step (3), such that ArgoCD can garbage collect the workload resources on the <managedcluster-name-to-clean-up>. ApplicationSets that are DR protected, with a regional DRPolicy, stop firing the VolumeSynchronizationDelay alert.
BZ reference: [ 2268594 ] 8.3. Troubleshooting 2-site stretch cluster with Arbiter 8.3.1. Recovering workload pods stuck in ContainerCreating state post zone recovery Problem After performing a complete zone failure and recovery, the workload pods are sometimes stuck in the ContainerCreating state with any of the below errors: MountDevice failed to create newCsiDriverClient: driver name openshift-storage.rbd.csi.ceph.com not found in the list of registered CSI drivers MountDevice failed for volume <volume_name> : rpc error: code = Aborted desc = an operation with the given Volume ID <volume_id> already exists MountVolume.SetUp failed for volume <volume_name> : rpc error: code = Internal desc = staging path <path> for volume <volume_id> is not a mountpoint Resolution If the workload pods are stuck with any of the above-mentioned errors, perform the following workarounds: For a ceph-fs workload stuck in ContainerCreating : Restart the nodes where the stuck pods are scheduled Delete these stuck pods Verify that the new pods are running For a ceph-rbd workload stuck in ContainerCreating that does not self-recover after some time Restart the csi-rbd plugin pods on the nodes where the stuck pods are scheduled Verify that the new pods are running | [
"oc get pvc -n <namespace>",
"oc delete pvc <pvcname> -n namespace",
"oc get drpc -n <namespace> -o wide",
"oc scale deploy -n openshift-gitops-operator openshift-gitops-operator-controller-manager --replicas=0 oc scale statefulset -n openshift-gitops openshift-gitops-application-controller --replicas=0",
"rbd mirror pool status --verbose ocs-storagecluster-cephblockpool | grep 'leader:'",
"oc logs -n busybox-workloads-3-2 volsync-rsync-src-dd-io-pvc-1-p25rz",
"VolSync rsync container version: ACM-0.6.0-ce9a280 Syncing data to volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local:22 ssh: Could not resolve hostname volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local: Name or service not known",
"oc delete pod -l app=submariner-lighthouse-agent -n submariner-operator",
"oc get placementdecision -n openshift-gitops --selector cluster.open-cluster-management.io/placement=<placement-name>",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: PlacementDecision metadata: labels: cluster.open-cluster-management.io/decision-group-index: \"1\" # Typically one higher than the same value in the esisting PlacementDecision determined at step (2) cluster.open-cluster-management.io/decision-group-name: \"\" cluster.open-cluster-management.io/placement: cephfs-appset-busybox10-placement name: <placemen-name>-decision-<n> # <n> should be one higher than the existing PlacementDecision as determined in step (2) namespace: openshift-gitops",
"decision-status.yaml: status: decisions: - clusterName: <managedcluster-name-to-clean-up> # This would be the cluster from where the workload was failed over, NOT the current workload cluster reason: FailoverCleanup",
"oc patch placementdecision -n openshift-gitops <placemen-name>-decision-<n> --patch-file=decision-status.yaml --subresource=status --type=merge",
"oc get application -n openshift-gitops <applicationset-name>-<managedcluster-name-to-clean-up>",
"oc delete placementdecision -n openshift-gitops <placemen-name>-decision-<n>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/troubleshooting_disaster_recovery |
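For the workarounds in 8.3.1 above, a hedged sketch of the pod restarts; the pod name, namespaces, label selector, and node name are assumptions that may differ per deployment:

oc delete pod <stuck_pod_name> -n <workload_namespace>
oc delete pod -n openshift-storage -l app=csi-rbdplugin --field-selector spec.nodeName=<node_name>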
Chapter 3. Distributed tracing platform (Tempo) | Chapter 3. Distributed tracing platform (Tempo) 3.1. Installing Installing the distributed tracing platform (Tempo) requires the Tempo Operator and choosing which type of deployment is best for your use case: For microservices mode, deploy a TempoStack instance in a dedicated OpenShift project. For monolithic mode, deploy a TempoMonolithic instance in a dedicated OpenShift project. Important Using object storage requires setting up a supported object store and creating a secret for the object store credentials before deploying a TempoStack or TempoMonolithic instance. 3.1.1. Installing the Tempo Operator You can install the Tempo Operator by using the web console or the command line. 3.1.1.1. Installing the Tempo Operator by using the web console You can install the Tempo Operator from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Go to Operators OperatorHub and search for Tempo Operator . Select the Tempo Operator that is provided by Red Hat . Important The following selections are the default presets for this Operator: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-tempo-operator Update approval Automatic Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Select Install Install View Operator . Verification In the Details tab of the page of the installed Operator, under ClusterServiceVersion details , verify that the installation Status is Succeeded . 3.1.1.2. Installing the Tempo Operator by using the CLI You can install the Tempo Operator from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). 
Procedure Create a project for the Tempo Operator by running the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: "true" name: openshift-tempo-operator EOF Create an Operator group by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Verification Check the Operator status by running the following command: USD oc get csv -n openshift-tempo-operator 3.1.2. Installing a TempoStack instance You can install a TempoStack instance by using the web console or the command line. 3.1.2.1. Installing a TempoStack instance by using the web console You can install a TempoStack instance from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Go to Home Projects Create Project to create a project of your choice for the TempoStack instance that you will create in a subsequent step. Go to Workloads Secrets Create From YAML to create a secret for your object storage bucket in the project that you created for the TempoStack instance. For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance. Note You can create multiple TempoStack instances in separate projects on the same cluster. Go to Operators Installed Operators . Select TempoStack Create TempoStack YAML view . In the YAML view , customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m 1 Size of the persistent volume claim for the Tempo WAL. The default is 10Gi . 2 Secret you created in step 2 for the object storage that had been set up as one of the prerequisites. 3 Value of the name in the metadata of the secret. 
4 Accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Optional. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route 1 In this example, the object storage was set up as one of the prerequisites, and the object storage secret was created in step 2. 2 The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Select Create . Verification Use the Project: dropdown list to select the project of the TempoStack instance. Go to Operators Installed Operators to verify that the Status of the TempoStack instance is Condition: Ready . Go to Workloads Pods to verify that all the component pods of the TempoStack instance are running. Access the Tempo console: Go to Networking Routes and Ctrl + F to search for tempo . In the Location column, open the URL to access the Tempo console. Note The Tempo console initially shows no trace data following the Tempo console installation. 3.1.2.2. Installing a TempoStack instance by using the CLI You can install a TempoStack instance from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run the oc login command: USD oc login --username=<your_username> You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Run the following command to create a project of your choice for the TempoStack instance that you will create in a subsequent step: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF In the project that you created for the TempoStack instance, create a secret for your object storage bucket by running the following command: USD oc apply -f - << EOF <object_storage_secret> EOF For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance in the project that you created for it: Note You can create multiple TempoStack instances in separate projects on the same cluster. 
Customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m 1 Size of the persistent volume claim for the Tempo WAL. The default is 10Gi . 2 Secret you created in step 2 for the object storage that had been set up as one of the prerequisites. 3 Value of the name in the metadata of the secret. 4 Accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Optional. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route 1 In this example, the object storage was set up as one of the prerequisites, and the object storage secret was created in step 2. 2 The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Apply the customized CR by running the following command: USD oc apply -f - << EOF <tempostack_cr> EOF Verification Verify that the status of all TempoStack components is Running and the conditions are type: Ready by running the following command: USD oc get tempostacks.tempo.grafana.com simplest -o yaml Verify that all the TempoStack component pods are running by running the following command: USD oc get pods Access the Tempo console: Query the route details by running the following command: USD oc get route Open https://<route_from_previous_step> in a web browser. Note The Tempo console initially shows no trace data following the Tempo console installation. 3.1.3. Installing a TempoMonolithic instance Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance by using the web console or the command line. The TempoMonolithic custom resource (CR) creates a Tempo deployment in monolithic mode. All components of the Tempo deployment, such as the compactor, distributor, ingester, querier, and query frontend, are contained in a single container. A TempoMonolithic instance supports storing traces in in-memory storage, a persistent volume, or object storage. 
Tempo deployment in monolithic mode is preferred for a small deployment, demonstration, testing, and as a migration path of the Red Hat OpenShift distributed tracing platform (Jaeger) all-in-one deployment. Note The monolithic deployment of Tempo does not scale horizontally. If you require horizontal scaling, use the TempoStack CR for a Tempo deployment in microservices mode. 3.1.3.1. Installing a TempoMonolithic instance by using the web console Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Home Projects Create Project to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step. Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage. Important Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , or Google Cloud Storage . Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this in Workloads Secrets Create From YAML . For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoMonolithic instance: Note You can create multiple TempoMonolithic instances in separate projects on the same cluster. Go to Operators Installed Operators . Select TempoMonolithic Create TempoMonolithic YAML view . In the YAML view , customize the TempoMonolithic custom resource (CR). The following TempoMonolithic CR creates a TempoMonolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a supported type of storage and exposing Jaeger UI via a route: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m 1 Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv . The accepted values for object storage are s3 , gcs , or azure , depending on the used object store type. 
The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down. 2 Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi . For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi . For object storage, this means the size of the persistent volume claim for the Tempo WAL, where the default is 10Gi . 3 Optional: For object storage, the type of object storage. The accepted values are s3 , gcs , and azure , depending on the used object store type. 4 Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup". 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Enables the Jaeger UI. 8 Enables creation of a route for the Jaeger UI. 9 Optional. Select Create . Verification Use the Project: dropdown list to select the project of the TempoMonolithic instance. Go to Operators Installed Operators to verify that the Status of the TempoMonolithic instance is Condition: Ready . Go to Workloads Pods to verify that the pod of the TempoMonolithic instance is running. Access the Jaeger UI: Go to Networking Routes and Ctrl + F to search for jaegerui . Note The Jaeger UI uses the tempo-<metadata_name_of_TempoMonolithic_CR>-jaegerui route. In the Location column, open the URL to access the Jaeger UI. When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_TempoMonolithic_CR>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_TempoMonolithic_CR>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_TempoMonolithic_CR>:3200 endpoint inside the cluster. 3.1.3.2. Installing a TempoMonolithic instance by using the CLI Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run the oc login command: USD oc login --username=<your_username> Procedure Run the following command to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage. 
Important Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , or Google Cloud Storage . Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this by running the following command: USD oc apply -f - << EOF <object_storage_secret> EOF For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoMonolithic instance in the project that you created for it. Tip You can create multiple TempoMonolithic instances in separate projects on the same cluster. Customize the TempoMonolithic custom resource (CR). The following TempoMonolithic CR creates a TempoMonolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a supported type of storage and exposing Jaeger UI via a route: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m 1 Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv . The accepted values for object storage are s3 , gcs , or azure , depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down. 2 Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi . For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi . For object storage, this means the size of the persistent volume claim for the Tempo WAL, where the default is 10Gi . 3 Optional: For object storage, the type of object storage. The accepted values are s3 , gcs , and azure , depending on the used object store type. 4 Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup". 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Enables the Jaeger UI. 8 Enables creation of a route for the Jaeger UI. 9 Optional. 
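Example of a TempoMonolithic CR with in-memory storage: the following is a minimal sketch that keeps the default memory backend and exposes the Jaeger UI through a route. The instance name sample is only illustrative, and the namespace placeholder follows the same convention as above.
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: sample
  namespace: <project_of_tempomonolithic_instance>
spec:
  storage:
    traces:
      backend: memory
      size: 2Gi
  jaegerui:
    enabled: true
    route:
      enabled: true
Because the memory backend keeps traces in a tmpfs volume, the data in this sketch does not survive a pod restart, so it is suitable only for evaluation and testing.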
Apply the customized CR by running the following command: USD oc apply -f - << EOF <tempomonolithic_cr> EOF Verification Verify that the status of all TempoMonolithic components is Running and the conditions are type: Ready by running the following command: USD oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml Run the following command to verify that the pod of the TempoMonolithic instance is running: USD oc get pods Access the Jaeger UI: Query the route details for the tempo-<metadata_name_of_tempomonolithic_cr>-jaegerui route by running the following command: USD oc get route Open https://<route_from_previous_step> in a web browser. When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_tempomonolithic_cr>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_tempomonolithic_cr>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_tempomonolithic_cr>:3200 endpoint inside the cluster. 3.1.4. Object storage setup You can use the following configuration parameters when setting up a supported object storage. Table 3.1. Required secret parameters Storage provider Secret parameters Red Hat OpenShift Data Foundation name: tempostack-dev-odf # example bucket: <bucket_name> # requires an ObjectBucketClaim endpoint: https://s3.openshift-storage.svc access_key_id: <data_foundation_access_key_id> access_key_secret: <data_foundation_access_key_secret> MinIO See MinIO Operator . name: tempostack-dev-minio # example bucket: <minio_bucket_name> # MinIO documentation endpoint: <minio_bucket_endpoint> access_key_id: <minio_access_key_id> access_key_secret: <minio_access_key_secret> Amazon S3 name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation endpoint: <s3_bucket_endpoint> access_key_id: <s3_access_key_id> access_key_secret: <s3_access_key_secret> Amazon S3 with Security Token Service (STS) name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation region: <s3_region> role_arn: <s3_role_arn> Microsoft Azure Blob Storage name: tempostack-dev-azure # example container: <azure_blob_storage_container_name> # Microsoft Azure documentation account_name: <azure_blob_storage_account_name> account_key: <azure_blob_storage_account_key> Google Cloud Storage on Google Cloud Platform (GCP) name: tempostack-dev-gcs # example bucketname: <google_cloud_storage_bucket_name> # requires a bucket created in a GCP project key.json: <path/to/key.json> # requires a service account in the bucket's GCP project for GCP authentication 3.1.4.1. Setting up the Amazon S3 storage with the Security Token Service You can set up the Amazon S3 storage with the Security Token Service (STS) by using the AWS Command Line Interface (AWS CLI). Important The Amazon S3 storage with the Security Token Service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have installed the latest version of the AWS CLI. Procedure Create an AWS S3 bucket. 
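For example, with the AWS CLI you can create the bucket with a command similar to the following, where the bucket name and region are placeholders; in the us-east-1 region, omit the --create-bucket-configuration flag:
$ aws s3api create-bucket \
  --bucket <s3_bucket_name> \
  --region <s3_region> \
  --create-bucket-configuration LocationConstraint=<s3_region>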
Create the following trust.json file for the AWS IAM policy that will set up a trust relationship for the AWS IAM role, created in the next step, with the service account of the TempoStack instance: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}" 1 }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_PROVIDER}:sub": [ "system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}", 2 "system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend" ] } } } ] } 1 OIDC provider that you have configured on the OpenShift Container Platform. You can also get the configured OIDC provider value by running the following command: USD oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's~http[s]*://~~g' . 2 Namespace in which you intend to create the TempoStack instance. Create an AWS IAM role by attaching the trust.json policy file that you created: USD aws iam create-role \ --role-name "tempo-s3-access" \ --assume-role-policy-document "file:///tmp/trust.json" \ --query Role.Arn \ --output text Attach an AWS IAM policy to the created role: USD aws iam attach-role-policy \ --role-name "tempo-s3-access" \ --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess" In the OpenShift Container Platform, create an object storage secret with keys as follows: apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque Additional resources AWS Identity and Access Management Documentation AWS Command Line Interface Documentation Configuring an OpenID Connect identity provider Identify AWS resources with Amazon Resource Names (ARNs) 3.1.4.2. Setting up IBM Cloud Object Storage You can set up IBM Cloud Object Storage by using the OpenShift CLI ( oc ). Prerequisites You have installed the latest version of OpenShift CLI ( oc ). For more information, see "Getting started with the OpenShift CLI" in Configure: CLI tools . You have installed the latest version of IBM Cloud Command Line Interface ( ibmcloud ). For more information, see "Getting started with the IBM Cloud CLI" in IBM Cloud Docs . You have configured IBM Cloud Object Storage. For more information, see "Choosing a plan and creating an instance" in IBM Cloud Docs . You have an IBM Cloud Platform account. You have ordered an IBM Cloud Object Storage plan. You have created an instance of IBM Cloud Object Storage. Procedure On IBM Cloud, create an object store bucket.
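For example, if your IBM Cloud CLI has the cloud-object-storage plugin installed, a command along the following lines typically creates the bucket; the subcommand name and options can vary between plugin versions, so verify them with ibmcloud cos help. The service instance ID and region are placeholders:
$ ibmcloud cos bucket-create \
  --bucket <tempo_bucket> \
  --ibm-service-instance-id <service_instance_id> \
  --region <region>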
On IBM Cloud, create a service key for connecting to the object store bucket by running the following command: USD ibmcloud resource service-key-create <tempo_bucket> Writer \ --instance-name <tempo_bucket> --parameters '{"HMAC":true}' On IBM Cloud, create a secret with the bucket credentials by running the following command: USD oc -n <namespace> create secret generic <ibm_cos_secret> \ --from-literal=bucket="<tempo_bucket>" \ --from-literal=endpoint="<ibm_bucket_endpoint>" \ --from-literal=access_key_id="<ibm_bucket_access_key>" \ --from-literal=access_key_secret="<ibm_bucket_secret_key>" On OpenShift Container Platform, create an object storage secret with keys as follows: apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque On OpenShift Container Platform, set the storage section in the TempoStack custom resource as follows: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... storage: secret: name: <ibm_cos_secret> 1 type: s3 # ... 1 Name of the secret that contains the IBM Cloud Storage access and secret keys. Additional resources Getting started with the OpenShift CLI Getting started with the IBM Cloud CLI (IBM Cloud Docs) Choosing a plan and creating an instance (IBM Cloud Docs) Getting started with IBM Cloud Object Storage: Before you begin (IBM Cloud Docs) 3.1.5. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI 3.2. Configuring The Tempo Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings for creating and deploying the distributed tracing platform (Tempo) resources. You can install the default configuration or modify the file. 3.2.1. Configuring back-end storage For information about configuring the back-end storage, see Understanding persistent storage and the relevant configuration section for your chosen storage option. 3.2.2. Introduction to TempoStack configuration parameters The TempoStack custom resource (CR) defines the architecture and settings for creating the distributed tracing platform (Tempo) resources. You can modify these parameters to customize your implementation to your business needs. Example TempoStack CR apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21 1 API version to use when creating the object. 2 Defines the kind of Kubernetes object to create. 3 Data that uniquely identifies the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. 4 Name of the TempoStack instance. 5 Contains all of the configuration parameters of the TempoStack instance. When a common definition for all Tempo components is required, define it in the spec section. 
When the definition relates to an individual component, place it in the spec.template.<component> section. 6 Storage is specified at instance deployment. See the installation page for information about storage options for the instance. 7 Defines the compute resources for the Tempo container. 8 Integer value for the number of ingesters that must acknowledge the data from the distributors before accepting a span. 9 Configuration options for retention of traces. 10 Configuration options for the Tempo distributor component. 11 Configuration options for the Tempo ingester component. 12 Configuration options for the Tempo compactor component. 13 Configuration options for the Tempo querier component. 14 Configuration options for the Tempo query-frontend component. 15 Configuration options for the Tempo gateway component. 16 Limits ingestion and query rates. 17 Defines ingestion rate limits. 18 Defines query rate limits. 19 Configures operands to handle telemetry data. 20 Configures search capabilities. 21 Defines whether or not this CR is managed by the Operator. The default value is managed . Additional resources Installing a TempoStack instance Installing a TempoMonolithic instance 3.2.3. Query configuration options Two components of the distributed tracing platform (Tempo), the querier and query frontend, manage queries. You can configure both of these components. The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at GET /querier/api/traces/<trace_id> , but it is not expected to be used directly. Queries must be sent to the query frontend. Table 3.2. Configuration parameters for the querier component Parameter Description Values nodeSelector The simple form of the node-selection constraint. type: object replicas The number of replicas to be created for the component. type: integer; format: int32 tolerations Component-specific pod tolerations. type: array The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: GET /api/traces/<trace_id> . Internally, the query frontend component splits the blockID space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries. Table 3.3. Configuration parameters for the query frontend component Parameter Description Values component Configuration of the query frontend component. type: object component.nodeSelector The simple form of the node selection constraint. type: object component.replicas The number of replicas to be created for the query frontend component. type: integer; format: int32 component.tolerations Pod tolerations specific to the query frontend component. type: array jaegerQuery The options specific to the Jaeger Query component. type: object jaegerQuery.enabled When enabled , creates the Jaeger Query component, jaegerQuery . type: boolean jaegerQuery.ingress The options for the Jaeger Query ingress. type: object jaegerQuery.ingress.annotations The annotations of the ingress object. type: object jaegerQuery.ingress.host The hostname of the ingress object. type: string jaegerQuery.ingress.ingressClassName The name of an IngressClass cluster resource. 
Defines which ingress controller serves this ingress resource. type: string jaegerQuery.ingress.route The options for the OpenShift route. type: object jaegerQuery.ingress.route.termination The termination type. The default is edge . type: string (enum: insecure, edge, passthrough, reencrypt) jaegerQuery.ingress.type The type of ingress for the Jaeger Query UI. The supported types are ingress , route , and none . type: string (enum: ingress, route) jaegerQuery.monitorTab The monitor tab configuration. type: object jaegerQuery.monitorTab.enabled Enables the monitor tab in the Jaeger console. The PrometheusEndpoint must be configured. type: boolean jaegerQuery.monitorTab.prometheusEndpoint The endpoint to the Prometheus instance that contains the span rate, error, and duration (RED) metrics. For example, https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 . type: string Example configuration of the query frontend component in a TempoStack CR apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route Additional resources Understanding taints and tolerations 3.2.4. Configuration of the monitor tab in Jaeger UI Trace data contains rich information, and the data is normalized across instrumented languages and frameworks. Therefore, request rate, error, and duration (RED) metrics can be extracted from traces. The metrics can be visualized in Jaeger console in the Monitor tab. The metrics are derived from spans in the OpenTelemetry Collector that are scraped from the Collector by the Prometheus deployed in the user-workload monitoring stack. The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them. 3.2.4.1. OpenTelemetry Collector configuration The OpenTelemetry Collector requires configuration of the spanmetrics connector that derives metrics from traces and exports the metrics in the Prometheus format. OpenTelemetry Collector custom resource for span RED kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: "tempo-simplest-distributor:4317" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus] 1 Creates the ServiceMonitor custom resource to enable scraping of the Prometheus exporter. 2 The Spanmetrics connector receives traces and exports metrics. 3 The OTLP receiver to receive spans in the OpenTelemetry protocol. 4 The Prometheus exporter is used to export metrics in the Prometheus format. 5 The Spanmetrics connector is configured as exporter in traces pipeline. 6 The Spanmetrics connector is configured as receiver in metrics pipeline. 3.2.4.2. Tempo configuration The TempoStack custom resource must specify the following: the Monitor tab is enabled, and the Prometheus endpoint is set to the Thanos querier service to query the data from the user-defined monitoring stack. 
TempoStack custom resource with the enabled Monitor tab apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: "" 3 ingress: type: route 1 Enables the monitoring tab in the Jaeger console. 2 The service name for Thanos Querier from user-workload monitoring. 3 Optional: The metrics namespace on which the Jaeger query retrieves the Prometheus metrics. Include this line only if you are using an OpenTelemetry Collector version earlier than 0.109.0. If you are using an OpenTelemetry Collector version 0.109.0 or later, omit this line. 3.2.4.3. Span RED metrics and alerting rules The metrics generated by the spanmetrics connector are usable with alerting rules. For example, for alerts about a slow service or to define service level objectives (SLOs), the connector creates a duration_bucket histogram and the calls counter metric. These metrics have labels that identify the service, API name, operation type, and other attributes. Table 3.4. Labels of the metrics created in the spanmetrics connector Label Description Values service_name Service name set by the otel_service_name environment variable. frontend span_name Name of the operation. / /customer span_kind Identifies the server, client, messaging, or internal operation. SPAN_KIND_SERVER SPAN_KIND_CLIENT SPAN_KIND_PRODUCER SPAN_KIND_CONSUMER SPAN_KIND_INTERNAL Example PrometheusRule CR that defines an alerting rule for SLO when not serving 95% of requests within 2000ms on the front-end service apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name="frontend", span_kind="SPAN_KIND_SERVER"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: "High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}" description: "{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)" 1 The expression for checking if 95% of the front-end server response time values are below 2000 ms. The time range ( [5m] ) must be at least four times the scrape interval and long enough to accommodate a change in the metric. 3.2.5. Configuring the receiver TLS The custom resource of your TempoStack or TempoMonolithic instance supports configuring the TLS for receivers by using user-provided certificates or OpenShift's service serving certificates. 3.2.5.1. Receiver TLS configuration for a TempoStack instance You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform. To provide a TLS certificate in a secret, configure it in the TempoStack custom resource. Note This feature is not supported with the enabled Tempo Gateway. TLS for receivers and using a user-provided certificate in a secret apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3 # ... 1 TLS enabled at the Tempo Distributor. 2 Secret containing a tls.key key and tls.crt certificate that you apply in advance. 
3 Optional: CA in a config map to enable mutual TLS authentication (mTLS). Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform. Note Mutual TLS authentication (mTLS) is not supported with this feature. TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... template: distributor: tls: enabled: true 1 # ... 1 Sufficient configuration for the TLS at the Tempo Distributor. Additional resources Understanding service serving certificates Service CA certificates 3.2.5.2. Receiver TLS configuration for a TempoMonolithic instance You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform. To provide a TLS certificate in a secret, configure it in the TempoMonolithic custom resource. Note This feature is not supported with the enabled Tempo Gateway. TLS for receivers and using a user-provided certificate in a secret apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic # ... spec: # ... ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3 # ... 1 TLS enabled at the Tempo Distributor. 2 Secret containing a tls.key key and tls.crt certificate that you apply in advance. 3 Optional: CA in a config map to enable mutual TLS authentication (mTLS). Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform. Note Mutual TLS authentication (mTLS) is not supported with this feature. TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic # ... spec: # ... ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1 # ... 1 Minimal configuration for the TLS at the Tempo Distributor. Additional resources Understanding service serving certificates Service CA certificates 3.2.6. Multitenancy Multitenancy with authentication and authorization is provided in the Tempo Gateway service. The authentication uses OpenShift OAuth and the Kubernetes TokenReview API. The authorization uses the Kubernetes SubjectAccessReview API. Note The Tempo Gateway service supports ingestion of traces only via the OTLP/gRPC. The OTLP/HTTP is not supported. Sample Tempo CR with two tenants, dev and prod apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 4 - tenantName: prod tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true 1 Must be set to openshift . 2 The list of tenants. 3 The tenant name. Must be provided in the X-Scope-OrgId header when ingesting the data. 4 A unique tenant ID. 5 Enables a gateway that performs authentication and authorization. The Jaeger UI is exposed at http://<gateway-ingress>/api/traces/v1/<tenant-name>/search . The authorization configuration uses the ClusterRole and ClusterRoleBinding of the Kubernetes Role-Based Access Control (RBAC). By default, no users have read or write permissions. 
Sample of the read RBAC configuration that allows authenticated users to read the trace data of the dev and prod tenants apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3 1 Lists the tenants. 2 The get value enables the read operation. 3 Grants all authenticated users the read permissions for trace data. Sample of the write RBAC configuration that allows the otel-collector service account to write the trace data for the dev tenant apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel 1 The service account name for the client to use when exporting trace data. The client must send the service account token, /var/run/secrets/kubernetes.io/serviceaccount/token , as the bearer token header. 2 Lists the tenants. 3 The create value enables the write operation. Trace data can be sent to the Tempo instance from the OpenTelemetry Collector that uses the service account with RBAC for writing the data. Sample OpenTelemetry CR configuration apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: "dev" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: "dev" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3 1 OTLP gRPC Exporter. 2 OTLP HTTP Exporter. 3 You can specify otlp/dev for the OTLP gRPC Exporter or otlphttp/dev for the OTLP HTTP Exporter. 3.2.7. Using taints and tolerations To schedule the TempoStack pods on dedicated nodes, see How to deploy the different TempoStack components on infra nodes using nodeSelector and tolerations in OpenShift 4 . 3.2.8. Configuring monitoring and alerts The Tempo Operator supports monitoring and alerts about each TempoStack component such as distributor, ingester, and so on, and exposes upgrade and operational metrics about the Operator itself. 3.2.8.1. Configuring the TempoStack metrics and alerts You can enable metrics and alerts of TempoStack instances. 
Prerequisites Monitoring for user-defined projects is enabled in the cluster. Procedure To enable metrics of a TempoStack instance, set the spec.observability.metrics.createServiceMonitors field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true To enable alerts for a TempoStack instance, set the spec.observability.metrics.createPrometheusRules field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: User , and check that ServiceMonitors in the format tempo-<instance_name>-<component> have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: User , and check that the Alert rules for the TempoStack instance components are available. Additional resources Enabling monitoring for user-defined projects 3.2.8.2. Configuring the Tempo Operator metrics and alerts When installing the Tempo Operator from the web console, you can select the Enable Operator recommended cluster monitoring on this Namespace checkbox, which enables creating metrics and alerts of the Tempo Operator. If the checkbox was not selected during installation, you can manually enable metrics and alerts even after installing the Tempo Operator. Procedure Add the openshift.io/cluster-monitoring: "true" label in the project where the Tempo Operator is installed, which is openshift-tempo-operator by default. Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: Platform , and search for tempo-operator , which must have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: Platform , and locate the Alert rules for the Tempo Operator . 3.3. Troubleshooting You can diagnose and fix issues in TempoStack or TempoMonolithic instances by using various troubleshooting methods. 3.3.1. Collecting diagnostic data from the command line When submitting a support case, it is helpful to include diagnostic information about your cluster to Red Hat Support. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as TempoStack or TempoMonolithic , and the created resources like Deployment , Pod , or ConfigMap . The oc adm must-gather tool creates a new pod that collects this data. Procedure From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data: USD oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- \ /usr/bin/must-gather --operator-namespace <operator_namespace> 1 1 The default namespace where the Operator is installed is openshift-tempo-operator . Verification Verify that the new directory is created and contains the collected data. 3.4. Upgrading For version upgrades, the Tempo Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. 
When the Tempo Operator is upgraded to the new version, it scans for running TempoStack instances that it manages and upgrades them to the version corresponding to the Operator's new version. 3.4.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators 3.5. Removing The steps for removing the Red Hat OpenShift distributed tracing platform (Tempo) from an OpenShift Container Platform cluster are as follows: Shut down all distributed tracing platform (Tempo) pods. Remove any TempoStack instances. Remove the Tempo Operator. 3.5.1. Removing by using the web console You can remove a TempoStack instance in the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Tempo Operator TempoStack . To remove the TempoStack instance, select Delete TempoStack Delete . Optional: Remove the Tempo Operator. 3.5.2. Removing by using the CLI You can remove a TempoStack instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> Procedure Get the name of the TempoStack instance by running the following command: USD oc get deployments -n <project_of_tempostack_instance> Remove the TempoStack instance by running the following command: USD oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance> Optional: Remove the Tempo Operator. Verification Run the following command to verify that the TempoStack instance is not found in the output, which indicates its successful removal: USD oc get deployments -n <project_of_tempostack_instance> 3.5.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI | [
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <tempostack_cr> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"oc get route",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc apply -f - << EOF <tempomonolithic_cr> EOF",
"oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml",
"oc get pods",
"oc get route",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }",
"aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text",
"aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque",
"ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'",
"oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"",
"apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3",
"apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/distributed_tracing/distributed-tracing-platform-tempo |
Chapter 2. OpenShift CLI (oc) | Chapter 2. OpenShift CLI (oc) 2.1. Getting started with the OpenShift CLI 2.1.1. About the OpenShift CLI With the OpenShift command-line interface (CLI), the oc command, you can create applications and manage OpenShift Container Platform projects from a terminal. The OpenShift CLI is ideal in the following situations: Working directly with project source code Scripting OpenShift Container Platform operations Managing projects while restricted by bandwidth resources and the web console is unavailable 2.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) either by downloading the binary or by using an RPM. 2.1.2.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2. Installing the OpenShift CLI by using the web console You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a web console. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. 
Download and install the new version of oc . 2.1.2.2.1. Installing the OpenShift CLI on Linux using the web console You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the appropriate oc binary for your Linux platform, and then click Download oc for Linux . Save the file. Unpack the archive. USD tar xvf <file> Move the oc binary to a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for your Windows platform, and then click Download oc for Windows for x86_64 . Save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for your macOS platform, and then click Download oc for Mac for x86_64 . Note For macOS arm64, click Download oc for Mac for ARM 64 . Save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account. Note It is not supported to install the OpenShift CLI ( oc ) as an RPM for Red Hat Enterprise Linux (RHEL) 9. You must install the OpenShift CLI for RHEL 9 by downloading the binary. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.12. # subscription-manager repos --enable="rhocp-4.12-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients After you install the CLI, it is available using the oc command: USD oc <command> 2.1.2.4. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. Procedure Run the following command to install the openshift-cli package: USD brew install openshift-cli 2.1.3. Logging in to the OpenShift CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster.
Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy. Authentication headers are sent only when using HTTPS transport. Procedure Enter the oc login command and pass in a user name: USD oc login -u user1 When prompted, enter the required information: Example output Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started. 1 Enter the OpenShift Container Platform server URL. 2 Enter whether to use insecure connections. 3 Enter the user's password. Note If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift Container Platform CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console. You can now create a project or issue other commands for managing your cluster. 2.1.4. Using the OpenShift CLI Review the following sections to learn how to complete common tasks using the CLI. 2.1.4.1. Creating a project Use the oc new-project command to create a new project. USD oc new-project my-project Example output Now using project "my-project" on server "https://openshift.example.com:6443". 2.1.4.2. Creating a new app Use the oc new-app command to create a new application. USD oc new-app https://github.com/sclorg/cakephp-ex Example output --> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php" ... Run 'oc status' to view your app. 2.1.4.3. Viewing pods Use the oc get pods command to view the pods for the current project. Note When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none> 2.1.4.4. Viewing pod logs Use the oc logs command to view logs for a particular pod. USD oc logs cakephp-ex-1-deploy Example output --> Scaling cakephp-ex-1 to 1 --> Success 2.1.4.5. Viewing the current project Use the oc project command to view the current project. USD oc project Example output Using project "my-project" on server "https://openshift.example.com:6443". 2.1.4.6. Viewing the status for the current project Use the oc status command to view information about the current project, such as services, deployments, and build configs. 
USD oc status Example output In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details. 2.1.4.7. Listing supported API resources Use the oc api-resources command to view the list of supported API resources on the server. USD oc api-resources Example output NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap ... 2.1.5. Getting help You can get help with CLI commands and OpenShift Container Platform resources in the following ways. Use oc help to get a list and description of all available CLI commands: Example: Get general help for the CLI USD oc help Example output OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application ... Use the --help flag to get help about a specific CLI command: Example: Get help for the oc create command USD oc create --help Example output Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags] ... Use the oc explain command to view the description and fields for a particular resource: Example: View documentation for the Pod resource USD oc explain pods Example output KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources ... 2.1.6. Logging out of the OpenShift CLI You can log out of the OpenShift CLI to end your current session. Use the oc logout command. USD oc logout Example output Logged "user1" out on "https://openshift.example.com" This deletes the saved authentication token from the server and removes it from your configuration file. 2.2. Configuring the OpenShift CLI 2.2.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 2.2.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 2.2.1.2.
Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. 2.3. Usage of oc and kubectl commands The Kubernetes command-line interface (CLI), kubectl , can be used to run commands against a Kubernetes cluster. Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Container Platform, or you can gain extended functionality by using the oc binary. 2.3.1. The oc binary The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional OpenShift Container Platform features, including: Full support for OpenShift Container Platform resources Resources such as DeploymentConfig , BuildConfig , Route , ImageStream , and ImageStreamTag objects are specific to OpenShift Container Platform distributions, and build upon standard Kubernetes primitives. Authentication The oc binary offers a built-in login command for authentication and lets you work with OpenShift Container Platform projects, which map Kubernetes namespaces to authenticated users. Read Understanding authentication for more information. Additional commands The additional command oc new-app , for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default. Important If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. If you want the latest features, you must download and install the latest version of the oc binary corresponding to your OpenShift Container Platform server version. Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server. Table 2.1. Compatibility Matrix In the following matrix, N is a number greater than or equal to 1: An X.Y oc client used with an X.Y server is fully compatible. An X.Y oc client used with an X.Y+N server might not be able to access the newer server features. An X.Y+N oc client used with an X.Y server might provide options and features that are not compatible with the accessed server. 2.3.2. The kubectl binary The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Container Platform users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Container Platform cluster. You can install the supported kubectl binary by following the steps to Install the OpenShift CLI .
The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM. For more information, see the kubectl documentation . 2.4. Managing CLI profiles A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools. A context consists of user authentication and OpenShift Container Platform server information associated with a nickname . 2.4.1. About switching between CLI profiles Contexts allow you to easily switch between multiple users across multiple OpenShift Container Platform servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After logging in with the CLI for the first time, OpenShift Container Platform creates a ~/.kube/config file if one does not already exist. As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file: CLI config file apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1.example.com:8443/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k 1 The clusters section defines connection details for OpenShift Container Platform clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443 . 2 The contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice , using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice , using the joe-project project, openshift1.example.com:8443 cluster and alice user. 3 The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster. 4 The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token. The CLI can support multiple configuration files that are loaded at runtime and merged together along with any override options specified from the command line.
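For example, assuming a second kubeconfig file exists at ~/.kube/dev-config (an illustrative path), you can set the KUBECONFIG environment variable to a colon-separated list of paths so that both files are merged at runtime, and then display the merged result:
USD export KUBECONFIG=~/.kube/config:~/.kube/dev-config
USD oc config view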
After you are logged in, you can use the oc status or oc project command to verify your current working environment: Verify the current working environment USD oc status Example output In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example. List the current project USD oc project Example output Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443". You can run the oc login command again and supply the required information during the interactive process to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist. If you are already logged in and want to switch to another project that the current user already has access to, use the oc project command and enter the name of the project: USD oc project alice-project Example output Now using project "alice-project" on server "https://openshift1.example.com:8443". At any time, you can use the oc config view command to view your current CLI configuration, as seen in the output. Additional CLI configuration commands are also available for more advanced usage. Note If you have access to administrator credentials but are no longer logged in as the default system user system:admin , you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project: USD oc login -u system:admin -n default 2.4.2. Manual configuration of CLI profiles Note This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects. If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful subcommands for this purpose: Table 2.2. CLI configuration subcommands Subcommand Usage set-cluster Sets a cluster entry in the CLI config file. If the referenced cluster nickname already exists, the specified information is merged in. USD oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true] set-context Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in. USD oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>] use-context Sets the current context using the specified context nickname. USD oc config use-context <context_nickname> set Sets an individual value in the CLI config file.
USD oc config set <property_name> <property_value> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set. unset Unsets individual values in the CLI config file. USD oc config unset <property_name> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. view Displays the merged CLI configuration currently in use. USD oc config view Displays the contents of the specified CLI config file. USD oc config view --config=<specific_filename> Example usage Log in as a user that uses an access token. This token is used by the alice user: USD oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 View the cluster entry that was automatically created: USD oc config view Example output apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 Update the current context to have users log in to the desired namespace: USD oc config set-context `oc config current-context` --namespace=<project_name> Examine the current context to confirm that the changes are implemented: USD oc whoami -c All subsequent CLI operations use the new context, unless otherwise specified by overriding CLI options or until the context is switched. 2.4.3. Load and merge rules When you issue CLI operations, the following rules determine the loading and merging order for the CLI configuration: CLI config files are retrieved from your workstation, using the following hierarchy and merge rules: If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place. If the USDKUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. Otherwise, the ~/.kube/config file is used and no merging takes place. The context to use is determined based on the first match in the following flow: The value of the --context option. The current-context value from the CLI config file. An empty value is allowed at this stage. The user and cluster to use are determined. At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster: The value of the --user option for the user name and the --cluster option for the cluster name. If the --context option is present, then use the context's value. An empty value is allowed at this stage. The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow: The values of any of the following command line options: --server, --api-version, --certificate-authority, --insecure-skip-tls-verify. If cluster information and a value for the attribute is present, then use it.
If you do not have a server location, then there is an error. The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command line options take precedence over config file values. Valid command line options are: --auth-path, --client-certificate, --client-key, --token. For any information that is still missing, default values are used and prompts are given for additional information. 2.5. Extending the OpenShift CLI with plugins You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift Container Platform CLI. 2.5.1. Writing CLI plugins You can write a plugin for the OpenShift Container Platform CLI in any programming language or script that allows you to write command-line commands. Note that you cannot use a plugin to overwrite an existing oc command. Procedure This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued. Create a file called oc-foo . When naming your plugin file, keep the following in mind: The file must begin with oc- or kubectl- to be recognized as a plugin. The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by a command of oc foo bar . You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by a command of oc foo-bar . Add the following contents to the file. #!/bin/bash # optional argument handling if [[ "USD1" == "version" ]] then echo "1.0.0" exit 0 fi # optional argument handling if [[ "USD1" == "config" ]] then echo USDKUBECONFIG exit 0 fi echo "I am a plugin named kubectl-foo" After you install this plugin for the OpenShift Container Platform CLI, it can be invoked using the oc foo command. Additional resources Review the Sample plugin repository for an example of a plugin written in Go. Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go. 2.5.2. Installing and using CLI plugins After you write a custom plugin for the OpenShift Container Platform CLI, you must install it to use the functionality that it provides. Prerequisites You must have the oc CLI tool installed. You must have a CLI plugin file that begins with oc- or kubectl- . Procedure If necessary, update the plugin file to be executable. USD chmod +x <plugin_file> Place the file anywhere in your PATH , such as /usr/local/bin/ . USD sudo mv <plugin_file> /usr/local/bin/. Run oc plugin list to make sure that the plugin is listed. USD oc plugin list Example output The following compatible plugins are available: /usr/local/bin/<plugin_file> If your plugin is not listed here, verify that the file begins with oc- or kubectl- , is executable, and is on your PATH . Invoke the new command or option introduced by the plugin. For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository , you can use the following command to view the current namespace. USD oc ns Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name of oc-foo-bar is invoked by the oc foo bar command. 2.6. Managing CLI plugins with Krew You can use Krew to install and manage plugins for the OpenShift CLI ( oc ).
Important Using Krew to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.6.1. Installing a CLI plugin with Krew You can install a plugin for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. Procedure To list all available plugins, run the following command: USD oc krew search To get information about a plugin, run the following command: USD oc krew info <plugin_name> To install a plugin, run the following command: USD oc krew install <plugin_name> To list all plugins that were installed by Krew, run the following command: USD oc krew list 2.6.2. Updating a CLI plugin with Krew You can update a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To update a single plugin, run the following command: USD oc krew upgrade <plugin_name> To update all plugins that were installed by Krew, run the following command: USD oc krew upgrade 2.6.3. Uninstalling a CLI plugin with Krew You can uninstall a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To uninstall a plugin, run the following command: USD oc krew uninstall <plugin_name> 2.6.4. Additional resources Krew Extending the OpenShift CLI with plugins 2.7. OpenShift CLI developer command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) developer commands. For administrator commands, see the OpenShift CLI administrator command reference . Run oc help to list all commands or run oc <command> --help to get additional details for a specific command. 2.7.1. OpenShift CLI (oc) developer commands 2.7.1.1. oc annotate Update the annotations on a resource Example usage # Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in "pod.json" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description- 2.7.1.2. 
oc api-resources Print the supported API resources on the server Example usage # Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io 2.7.1.3. oc api-versions Print the supported API versions on the server, in the form of "group/version" Example usage # Print the supported API versions oc api-versions 2.7.1.4. oc apply Apply a configuration to a resource by file name or stdin Example usage # Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' - i.e. expand wildcard characters in file names oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap 2.7.1.5. oc apply edit-last-applied Edit latest last-applied-configuration annotations of a resource/object Example usage # Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json 2.7.1.6. oc apply set-last-applied Set the last-applied-configuration annotation on a live object to match the contents of a file Example usage # Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true 2.7.1.7. oc apply view-last-applied View the latest last-applied-configuration annotations of a resource/object Example usage # View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json 2.7.1.8. oc attach Attach to a running container Example usage # Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx 2.7.1.9. 
oc auth can-i Check whether an action is allowed Example usage # Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if I can do everything in my current namespace ("*" means all) oc auth can-i '*' '*' # Check to see if I can get the job named "bar" in namespace "foo" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace "foo" oc auth can-i --list --namespace=foo 2.7.1.10. oc auth reconcile Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects Example usage # Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml 2.7.1.11. oc autoscale Autoscale a deployment config, deployment, replica set, stateful set, or replication controller Example usage # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80 2.7.1.12. oc cancel-build Cancel running, pending, or new builds Example usage # Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new 2.7.1.13. oc cluster-info Display cluster information Example usage # Print the address of the control plane and cluster services oc cluster-info 2.7.1.14. oc cluster-info dump Dump relevant information for debugging and diagnosis Example usage # Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state 2.7.1.15. oc completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Example usage # Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
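## For example, on RHEL or Fedora the 'bash-completion' package can typically be
## installed as follows (shown as an illustration; the exact command depends on
## your distribution's package manager)
yum install bash-completion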
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf " # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' " >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > "USD{fpath[1]}/_oc" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\.kube\completion.ps1 Add-Content USDPROFILE "USDHOME\.kube\completion.ps1" ## Execute completion code in the profile Add-Content USDPROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE 2.7.1.16. oc config current-context Display the current-context Example usage # Display the current-context oc config current-context 2.7.1.17. oc config delete-cluster Delete the specified cluster from the kubeconfig Example usage # Delete the minikube cluster oc config delete-cluster minikube 2.7.1.18. oc config delete-context Delete the specified context from the kubeconfig Example usage # Delete the context for the minikube cluster oc config delete-context minikube 2.7.1.19. oc config delete-user Delete the specified user from the kubeconfig Example usage # Delete the minikube user oc config delete-user minikube 2.7.1.20. oc config get-clusters Display clusters defined in the kubeconfig Example usage # List the clusters that oc knows about oc config get-clusters 2.7.1.21. oc config get-contexts Describe one or many contexts Example usage # List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context 2.7.1.22. oc config get-users Display users defined in the kubeconfig Example usage # List the users that oc knows about oc config get-users 2.7.1.23. oc config rename-context Rename a context from the kubeconfig file Example usage # Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name 2.7.1.24. oc config set Set an individual value in a kubeconfig file Example usage # Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo "cert_data_here" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true 2.7.1.25. 
oc config set-cluster Set a cluster entry in kubeconfig Example usage # Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set proxy url for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4 2.7.1.26. oc config set-context Set a context entry in kubeconfig Example usage # Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin 2.7.1.27. oc config set-credentials Set a user entry in kubeconfig Example usage # Set only the "client-key" field on the "cluster-admin" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the "cluster-admin" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the "cluster-admin" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=var-to-remove- 2.7.1.28. oc config unset Unset an individual value in a kubeconfig file Example usage # Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace 2.7.1.29. oc config use-context Set the current-context in a kubeconfig file Example usage # Use the context for the minikube cluster oc config use-context minikube 2.7.1.30. oc config view Display merged kubeconfig settings or a specified kubeconfig file Example usage # Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' 2.7.1.31. oc cp Copy files and directories to and from containers Example usage # !!!Important Note!!! 
# Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar 2.7.1.32. oc create Create a resource from a file or from stdin Example usage # Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json 2.7.1.33. oc create build Create a new build Example usage # Create a new build oc create build myapp 2.7.1.34. oc create clusterresourcequota Create a cluster resource quota Example usage # Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10 2.7.1.35. oc create clusterrole Create a cluster role Example usage # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named "pod-reader" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named "foo" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named "foo" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name "foo" with NonResourceURL specified oc create clusterrole "foo" --verb=get --non-resource-url=/logs/* # Create a cluster role name "monitoring" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" 2.7.1.36. oc create clusterrolebinding Create a cluster role binding for a particular cluster role Example usage # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1 2.7.1.37. 
oc create configmap Create a config map from a local file, directory or literal value Example usage # Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.38. oc create cronjob Create a cron job with the specified name Example usage # Create a cron job oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date 2.7.1.39. oc create deployment Create a deployment with the specified name Example usage # Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 2.7.1.40. oc create deploymentconfig Create a deployment config with default options that uses a given image Example usage # Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx 2.7.1.41. oc create identity Manually create an identity (only needed if automatic creation is disabled) Example usage # Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones" oc create identity acme_ldap:adamjones 2.7.1.42. oc create imagestream Create a new empty image stream Example usage # Create a new image stream oc create imagestream mysql 2.7.1.43. oc create imagestreamtag Create a new image stream tag Example usage # Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0 2.7.1.44. 
oc create ingress Create an ingress with the specified name Example usage # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret "my-cert" oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert" # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress" oc create ingress catch-all --class=otheringress --rule="/path=svc:port" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \ --annotation ingress.annotation1=foo \ --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default \ --rule="foo.com/=svc:port" \ --rule="foo.com/admin/=svcadmin:portadmin" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default \ --rule="foo.com/path*=svc:8080" \ --rule="bar.com/admin*=svc2:http" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default \ --rule="foo.com/=svc:https,tls" \ --rule="foo.com/path/subpath*=othersvc:8080" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default \ --rule="foo.com/*=svc:8080,tls=secret1" # Create an ingress with a default backend oc create ingress ingdefault --class=default \ --default-backend=defaultsvc:http \ --rule="foo.com/*=svc:8080,tls=secret1" 2.7.1.45. oc create job Create a job with the specified name Example usage # Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named "a-cronjob" oc create job test-job --from=cronjob/a-cronjob 2.7.1.46. oc create namespace Create a namespace with the specified name Example usage # Create a new namespace named my-namespace oc create namespace my-namespace 2.7.1.47. oc create poddisruptionbudget Create a pod disruption budget with the specified name Example usage # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50% 2.7.1.48. oc create priorityclass Create a priority class with the specified name Example usage # Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description="high priority" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never" 2.7.1.49. 
oc create quota Create a quota with the specified name Example usage # Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort 2.7.1.50. oc create role Create a role with single rule Example usage # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named "pod-reader" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named "foo" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named "foo" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status 2.7.1.51. oc create rolebinding Create a role binding for a particular role or cluster role Example usage # Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 2.7.1.52. oc create route edge Create a route that uses edge TLS termination Example usage # Create an edge route named "my-route" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets 2.7.1.53. oc create route passthrough Create a route that uses passthrough TLS termination Example usage # Create a passthrough route named "my-route" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com 2.7.1.54. oc create route reencrypt Create a route that uses reencrypt TLS termination Example usage # Create a route named "my-route" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend 2.7.1.55. oc create secret docker-registry Create a secret for use with a Docker registry Example usage # If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using: oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json 2.7.1.56. 
oc create secret generic Create a secret from a local file, directory, or literal value Example usage # Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.57. oc create secret tls Create a TLS secret Example usage # Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key 2.7.1.58. oc create service clusterip Create a ClusterIP service Example usage # Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip="None" 2.7.1.59. oc create service externalname Create an ExternalName service Example usage # Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com 2.7.1.60. oc create service loadbalancer Create a LoadBalancer service Example usage # Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080 2.7.1.61. oc create service nodeport Create a NodePort service Example usage # Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080 2.7.1.62. oc create serviceaccount Create a service account with the specified name Example usage # Create a new service account named my-service-account oc create serviceaccount my-service-account 2.7.1.63. oc create token Request a service account token Example usage # Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific uid oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc 2.7.1.64. oc create user Manually create a user (only needed if automatic creation is disabled) Example usage # Create a user with the username "ajones" and the display name "Adam Jones" oc create user ajones --full-name="Adam Jones" 2.7.1.65. oc create useridentitymapping Manually map an identity to a user Example usage # Map the identity "acme_ldap:adamjones" to the user "ajones" oc create useridentitymapping acme_ldap:adamjones ajones 2.7.1.66. 
oc debug Launch a new instance of a pod for debugging Example usage # Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns 2.7.1.67. oc delete Delete resources by file names, stdin, resources and names, or by resources and label selector Example usage # Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' - i.e. expand wildcard characters in file names oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names "baz" and "foo" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all 2.7.1.68. oc describe Show details of a specific resource or group of resources Example usage # Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in "pod.json" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe po -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend 2.7.1.69. oc diff Diff the live version against a would-be applied version Example usage # Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f - 2.7.1.70. oc edit Edit a resource on the server Example usage # Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR="nano" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the deployment/mydeployment's status subresource oc edit deployment mydeployment --subresource='status' 2.7.1.71. 
oc exec Execute a command in a container Example usage # Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date 2.7.1.72. oc explain Get documentation for a resource Example usage # Get the documentation of the resource and its fields oc explain pods # Get the documentation of a specific field of a resource oc explain pods.spec.containers 2.7.1.73. oc expose Expose a replicated application as a service or route Example usage # Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx 2.7.1.74. oc extract Extract secrets or config maps to disk Example usage # Extract the secret "test" to the current directory oc extract secret/test # Extract the config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map "nginx" to STDOUT oc extract configmap/nginx --to=- # Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf 2.7.1.75. oc get Display one or many resources Example usage # List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the "v1" version of the "apps" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in "pod.yaml" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List status subresource for a single pod. oc get pod web-pod-13je7 --subresource status 2.7.1.76. oc idle Idle scalable resources Example usage # Idle the scalable controllers associated with the services listed in to-idle.txt $ oc idle --resource-names-file to-idle.txt 2.7.1.77. oc image append Add layers to images and push them to a registry Example usage # Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in $(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in $(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk ($(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: Wildcard filter is not supported with append. Pass a single os/arch to append oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz 2.7.1.78. oc image extract Copy files from an image to the file system Example usage # Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract. Pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. 
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:] 2.7.1.79. oc image info Display information about an image Example usage # Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64 2.7.1.80. oc image mirror Mirror images from one repository to another Example usage # Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \ docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch # Copy all os/arch 
manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=.* 2.7.1.81. oc import-image Import images from a container image registry Example usage # Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm 2.7.1.82. oc kustomize Build a kustomization target from a directory or URL. Example usage # Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6 2.7.1.83. oc label Update the labels on a resource Example usage # Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in "pod.json" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar- 2.7.1.84. oc login Log in to a server Example usage # Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass 2.7.1.85. oc logout End the current server session Example usage # Log out oc logout 2.7.1.86. oc logs Print the logs for a container in a pod Example usage # Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container 2.7.1.87. 
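Log output from oc logs above can also be bounded with the standard flags. A small sketch reusing the pod and container names from the examples above (the line count and time window are arbitrary):
# Show only the last 100 lines, limited to entries from the past hour
oc logs backend -c ruby-container --tail=100 --since=1h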
oc new-app Create a new application Example usage # List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match "ruby" oc new-app --search ruby # Search for "ruby", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for "ruby" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml 2.7.1.88. oc new-build Create a new build configuration Example usage # Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D $'FROM centos:7\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp 2.7.1.89. oc new-project Request a new project Example usage # Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team." 2.7.1.90. oc observe Observe changes to resources and react to them (experimental) Example usage # Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh 2.7.1.91. oc patch Update fields of a resource Example usage # Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p $'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch oc patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Update a deployment's replicas through the scale subresource using a merge patch. oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' 2.7.1.92. oc plugin list List all visible plugin executables on a user's PATH Example usage # List all available plugins oc plugin list 2.7.1.93. 
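oc patch above is not limited to spec fields; metadata such as labels can be patched the same way. An illustrative sketch (the deployment name and label are hypothetical):
# Add or update a label on a deployment with a strategic merge patch
oc patch deployment my-deployment -p '{"metadata":{"labels":{"environment":"qa"}}}'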
oc policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1 2.7.1.94. oc policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml 2.7.1.95. oc policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml 2.7.1.96. oc port-forward Forward one or more local ports to a pod Example usage # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000 2.7.1.97. 
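Once a forward from oc port-forward above is established it stays in the foreground, so the forwarded port is exercised from a second shell. A minimal sketch (the pod name and ports are hypothetical):
# Terminal 1: forward local port 8080 to port 80 in the pod
oc port-forward pod/mypod 8080:80
# Terminal 2: send a request through the tunnel
curl http://localhost:8080/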
oc process Process a template into list of resources Example usage # Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f - 2.7.1.98. oc project Switch to another project Example usage # Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project 2.7.1.99. oc projects Display existing projects Example usage # List all projects oc projects 2.7.1.100. oc proxy Run a proxy to the Kubernetes API server Example usage # To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api 2.7.1.101. oc registry info Print information about the integrated registry Example usage # Display information about the integrated registry oc registry info 2.7.1.102. oc registry login Log in to the integrated registry Example usage # Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS 2.7.1.103. oc replace Replace a resource by file name or stdin Example usage # Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json 2.7.1.104. oc rollback Revert part of an application back to a deployment Example usage # Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json 2.7.1.105. 
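For oc process above, parameters can also come from a file rather than the command line. A hedged sketch (the file name params.env is hypothetical):
# Fill in template parameters from an env-style file and create the resulting objects
oc process -f template.json --param-file=params.env | oc create -f -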
oc rollout cancel Cancel the in-progress deployment Example usage # Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx 2.7.1.106. oc rollout history View rollout history Example usage # View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3 2.7.1.107. oc rollout latest Start a new rollout for a deployment config with the latest state from its triggers Example usage # Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json 2.7.1.108. oc rollout pause Mark the provided resource as paused Example usage # Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx 2.7.1.109. oc rollout restart Restart a resource Example usage # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx 2.7.1.110. oc rollout resume Resume a paused resource Example usage # Resume an already paused deployment oc rollout resume dc/nginx 2.7.1.111. oc rollout retry Retry the latest failed rollout Example usage # Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend 2.7.1.112. oc rollout status Show the status of the rollout Example usage # Watch the status of the latest rollout oc rollout status dc/nginx 2.7.1.113. oc rollout undo Undo a rollout Example usage # Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3 2.7.1.114. oc rsh Start a shell session in a container Example usage # Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled 2.7.1.115. oc rsync Copy files between a local file system and a pod Example usage # Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir 2.7.1.116. 
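For iterative development, oc rsync above can keep watching the source directory and re-syncing as files change. A sketch using the same placeholders as the examples above:
# Continuously synchronize the local directory into the pod as files change
oc rsync ./local/dir/ POD:/remote/dir --watch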
oc run Run a particular image on the cluster Example usage # Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default" # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN> 2.7.1.117. oc scale Set a new size for a deployment, replica set, or replication controller Example usage # Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/foo rc/bar rc/baz # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web 2.7.1.118. oc secrets link Link secrets to a service account Example usage # Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount 2.7.1.119. oc secrets unlink Detach secrets from a service account Example usage # Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name ... 2.7.1.120. oc set build-hook Update a build hook on a build config Example usage # Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh" 2.7.1.121. oc set build-secret Update a build secret on a build config Example usage # Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret 2.7.1.122. 
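Scaling with oc scale above is asynchronous, so pairing it with oc rollout status gives a simple wait-until-ready flow. An illustrative sketch (the deployment name is hypothetical):
# Scale up, then block until the new replicas are rolled out
oc scale --replicas=3 deployment/frontend
oc rollout status deployment/frontend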
oc set data Update the data within a config map or secret Example usage # Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir 2.7.1.123. oc set deployment-hook Update a deployment hook on a deployment config Example usage # Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh 2.7.1.124. oc set env Update environment variables on a pod template Example usage # Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers="c1" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server env | grep RAILS_ | oc set env -e - dc/myapp 2.7.1.125. oc set image Update the image of a pod template Example usage # Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in yaml format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml 2.7.1.126. 
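A quick way to confirm what oc set env above actually changed is to list the variables afterwards. A small sketch (the deployment config name and variable are hypothetical):
# Set a variable, then list the resulting environment
oc set env dc/myapp LOG_LEVEL=debug
oc set env dc/myapp --list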
oc set image-lookup Change how images are resolved when deploying applications Example usage # Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all 2.7.1.127. oc set probe Update a probe on a pod template Example usage # Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30 2.7.1.128. oc set resources Update resource requests/limits on objects with pod templates Example usage # Set a deployment's nginx container CPU limits to "200m" and memory to "512Mi" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml 2.7.1.129. oc set route-backends Update the backends for a route Example usage # Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10% relative to a oc set route-backends web --adjust b=+10% # Set traffic percentage going to b to 10% of the traffic going to a oc set route-backends web --adjust b=10% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight of all backends to zero oc set route-backends web --zero 2.7.1.130. oc set selector Set the selector on a resource Example usage # Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f - 2.7.1.131. 
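The probe flags for oc set probe above can be combined, so readiness and liveness can share one definition. A hedged sketch (the resource name, port, and path are hypothetical):
# Apply the same HTTP GET check as both the readiness and the liveness probe
oc set probe dc/webapp --readiness --liveness --get-url=http://:8080/healthz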
oc set serviceaccount Update the service account of a resource Example usage # Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml 2.7.1.132. oc set subject Update the user, group, or service account in a role binding or cluster role binding Example usage # Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml 2.7.1.133. oc set triggers Update the triggers on one or more objects Example usage # Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main 2.7.1.134. oc set volumes Update volumes on a pod template Example usage # List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (pvc) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount "v1" from container "c1" # (and by removing the volume "v1" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string> 2.7.1.135. 
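oc set volume above also understands secret-backed volumes, which is handy for mounting certificates. An illustrative sketch (the secret name and mount path are hypothetical):
# Mount an existing TLS secret into the application pods
oc set volume dc/myapp --add --name=certs --type=secret --secret-name=tls-secret --mount-path=/etc/tls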
oc start-build Start a new build Example usage # Starts build from build config "hello-world" oc start-build hello-world # Starts build from a build "hello-world-1" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config "hello-world" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config "hello-world" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait 2.7.1.136. oc status Show an overview of the current project Example usage # See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest 2.7.1.137. oc tag Tag existing images into image streams Example usage # Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d 2.7.1.138. oc version Print the client and server version information Example usage # Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context oc version --short # Print the OpenShift client version information for the current context oc version --client 2.7.1.139. oc wait Experimental: Wait for a specific condition on one or many resources Example usage # Wait for the pod "busybox1" to contain the status condition of type "Ready" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity): oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod "busybox1" to contain the status phase to be "Running". oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s 2.7.1.140. oc whoami Return information about the current session Example usage # Display the currently authenticated user oc whoami 2.7.2. Additional resources OpenShift CLI administrator command reference 2.8. OpenShift CLI administrator command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) administrator commands. You must have cluster-admin or equivalent permissions to use these commands. 
For developer commands, see the OpenShift CLI developer command reference . Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command. 2.8.1. OpenShift CLI (oc) administrator commands 2.8.1.1. oc adm build-chain Output the inputs and dependencies of your builds Example usage # Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all 2.8.1.2. oc adm catalog mirror Mirror an operator-registry catalog Example usage # Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageContentSourcePolicy.yaml # Edit the mirroring mappings and mirror with "oc image mirror" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageContentSourcePolicies generated by oc adm catalog mirror oc delete imagecontentsourcepolicy -l operators.openshift.org/catalog=true 2.8.1.3. oc adm certificate approve Approve a certificate signing request Example usage # Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp 2.8.1.4. oc adm certificate deny Deny a certificate signing request Example usage # Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp 2.8.1.5. oc adm cordon Mark node as unschedulable Example usage # Mark node "foo" as unschedulable oc adm cordon foo 2.8.1.6. oc adm create-bootstrap-project-template Create a bootstrap project template Example usage # Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml 2.8.1.7. oc adm create-error-template Create an error page template Example usage # Output a template for the error page to stdout oc adm create-error-template 2.8.1.8. oc adm create-login-template Create a login template Example usage # Output a template for the login page to stdout oc adm create-login-template 2.8.1.9. oc adm create-provider-selection-template Create a provider selection template Example usage # Output a template for the provider selection page to stdout oc adm create-provider-selection-template 2.8.1.10. oc adm drain Drain node in preparation for maintenance Example usage # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900 2.8.1.11. oc adm groups add-users Add users to a group Example usage # Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2 2.8.1.12. 
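The cordon and drain commands above are usually run as a pair around node maintenance, with uncordon restoring scheduling afterwards. A hedged sketch (the node name is hypothetical, and the extra drain flags are common options rather than requirements):
# Take the node out of scheduling, evict its pods, perform maintenance, then re-enable scheduling
oc adm cordon worker-1
oc adm drain worker-1 --ignore-daemonsets --force --grace-period=60
oc adm uncordon worker-1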
oc adm groups new Create a new group Example usage # Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name 2.8.1.13. oc adm groups prune Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm groups prune --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm groups prune --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.14. oc adm groups remove-users Remove users from a group Example usage # Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2 2.8.1.15. oc adm groups sync Sync OpenShift groups with records from an external provider Example usage # Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in a whitelist file with an LDAP server oc adm groups sync --whitelist=/path/to/whitelist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm 2.8.1.16. oc adm inspect Collect debugging data for a given resource Example usage # Collect debugging data for the "openshift-apiserver" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions 2.8.1.17. oc adm migrate template-instances Update template instances to point to the latest group-version-kinds Example usage # Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm 2.8.1.18. 
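oc adm inspect above can also be pointed at a specific output directory, which keeps repeated collections separate. A small sketch (the directory path is hypothetical):
# Collect the debugging data into a chosen local directory
oc adm inspect clusteroperator/openshift-apiserver --dest-dir=/tmp/inspect-apiserver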
oc adm must-gather Launch a new instance of a pod for gathering debug information Example usage # Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod-dir oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh 2.8.1.19. oc adm new-project Create a new project Example usage # Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east' 2.8.1.20. oc adm node-logs Display and filter node logs Example usage # Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/logs oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron 2.8.1.21. oc adm pod-network isolate-projects Isolate project network Example usage # Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret' 2.8.1.22. oc adm pod-network join-projects Join project network Example usage # Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret' 2.8.1.23. oc adm pod-network make-projects-global Make project network global Example usage # Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share' 2.8.1.24. oc adm policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1 2.8.1.25. oc adm policy add-scc-to-group Add a security context constraint to groups Example usage # Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2 2.8.1.26. oc adm policy add-scc-to-user Add a security context constraint to users or a service account Example usage # Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1 2.8.1.27. 
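Each add-scc-to-* command above has a matching removal form for undoing the grant. An illustrative sketch (the user and service account names are hypothetical):
# Remove a previously granted SCC from a user and from a service account
oc adm policy remove-scc-from-user restricted user1
oc adm policy remove-scc-from-user privileged -z serviceaccount1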
oc adm policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml 2.8.1.28. oc adm policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml 2.8.1.29. oc adm prune builds Remove old completed and failed builds Example usage # Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm 2.8.1.30. oc adm prune deployments Remove old completed and failed deployment configs Example usage # Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm 2.8.1.31. oc adm prune groups Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.32. 
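The prune commands above all follow the same dry-run-first pattern, and builds can additionally be kept by age and count. A hedged sketch (the retention values are arbitrary):
# Preview, then actually prune, keeping recent and young builds
oc adm prune builds --keep-complete=5 --keep-failed=1 --keep-younger-than=60m
oc adm prune builds --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm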
2.8.1.32. oc adm prune images
Remove unreferenced images
Example usage
# See what the prune command would delete if only images and their referrers were more than an hour old
# and obsoleted by 3 newer revisions under the same tag were considered
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m
# To actually perform the prune operation, the confirm flag must be appended
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm
# See what the prune command would delete if we are interested in removing images
# exceeding currently set limit ranges ('openshift.io/Image')
oc adm prune images --prune-over-size-limit
# To actually perform the prune operation, the confirm flag must be appended
oc adm prune images --prune-over-size-limit --confirm
# Force the insecure http protocol with the particular registry host name
oc adm prune images --registry-url=http://registry.example.org --confirm
# Force a secure connection with a custom certificate authority to the particular registry host name
oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm
2.8.1.33. oc adm release extract
Extract the contents of an update payload to disk
Example usage
# Use git to check out the source code for the current cluster release to DIR
oc adm release extract --git=DIR
# Extract cloud credential requests for AWS
oc adm release extract --credentials-requests --cloud=aws
# Use git to check out the source code for the current cluster release to DIR from linux/s390x image
# Note: Wildcard filter is not supported. Pass a single os/arch to extract
oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.2.2 --filter-by-os=linux/s390x
2.8.1.34. oc adm release info
Display information about a release
Example usage
# Show information about the cluster's current release
oc adm release info
# Show the source code that comprises a release
oc adm release info 4.2.2 --commit-urls
# Show the source code difference between two releases
oc adm release info 4.2.0 4.2.2 --commits
# Show where the images referenced by the release are located
oc adm release info quay.io/openshift-release-dev/ocp-release:4.2.2 --pullspecs
# Show information about linux/s390x image
# Note: Wildcard filter is not supported. Pass a single os/arch to extract
oc adm release info quay.io/openshift-release-dev/ocp-release:4.2.2 --filter-by-os=linux/s390x
2.8.1.35. oc adm release mirror
Mirror a release to a different image registry location
Example usage
# Perform a dry run showing what would be mirrored, including the mirror objects
oc adm release mirror 4.3.0 --to myregistry.local/openshift/release \
--release-image-signature-to-dir /tmp/releases --dry-run
# Mirror a release into the current directory
oc adm release mirror 4.3.0 --to file://openshift/release \
--release-image-signature-to-dir /tmp/releases
# Mirror a release to another directory in the default location
oc adm release mirror 4.3.0 --to-dir /tmp/releases
# Upload a release from the current directory to another server
oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \
--release-image-signature-to-dir /tmp/releases
# Mirror the 4.3.0 release to repository registry.example.com and apply signatures to connected cluster
oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 \
--to=registry.example.com/your/repository --apply-release-image-signature
2.8.1.36. oc adm release new
Create a new OpenShift release
Example usage
# Create a release from the latest origin images and push to a DockerHub repo
oc adm release new --from-image-stream=4.1 -n origin --to-image docker.io/mycompany/myrepo:latest
# Create a new release with updated metadata from a previous release
oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1 --name 4.1.1 \
--previous 4.1.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest
# Create a new release and override a single image
oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1 \
cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest
# Run a verification pass to ensure the release can be reproduced
oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1
2.8.1.37. oc adm taint
Update the taints on one or more nodes
Example usage
# Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'
# If a taint with that key and effect already exists, its value is replaced as specified
oc adm taint nodes foo dedicated=special-user:NoSchedule
# Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists
oc adm taint nodes foo dedicated:NoSchedule-
# Remove from node 'foo' all the taints with key 'dedicated'
oc adm taint nodes foo dedicated-
# Add a taint with key 'dedicated' on nodes having label mylabel=X
oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule
# Add to node 'foo' a taint with key 'bar' and no value
oc adm taint nodes foo bar:NoSchedule
2.8.1.38. oc adm top images
Show usage statistics for images
Example usage
# Show usage statistics for images
oc adm top images
2.8.1.39. oc adm top imagestreams
Show usage statistics for image streams
Example usage
# Show usage statistics for image streams
oc adm top imagestreams
2.8.1.40. oc adm top node
Display resource (CPU/memory) usage of nodes
Example usage
# Show metrics for all nodes
oc adm top node
# Show metrics for a given node
oc adm top node NODE_NAME
2.8.1.41. oc adm top pod
Display resource (CPU/memory) usage of pods
Example usage
# Show metrics for all pods in the default namespace
oc adm top pod
# Show metrics for all pods in the given namespace
oc adm top pod --namespace=NAMESPACE
# Show metrics for a given pod and its containers
oc adm top pod POD_NAME --containers
# Show metrics for the pods defined by label name=myLabel
oc adm top pod -l name=myLabel
2.8.1.42. oc adm uncordon
Mark node as schedulable
Example usage
# Mark node "foo" as schedulable
oc adm uncordon foo
2.8.1.43. oc adm upgrade
Upgrade a cluster or adjust the upgrade channel
Example usage
# Review the available cluster updates
oc adm upgrade
# Update to the latest version
oc adm upgrade --to-latest=true
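As a brief illustrative sketch (assuming a cluster-admin context; the destination directory is a placeholder), several of the administrator commands documented above combine into a routine triage pass: check node resource usage, read kubelet logs from the control plane, then collect a support bundle.
# Spot overloaded nodes, inspect kubelet logs on the masters, and gather diagnostics into a local directory
oc adm top node
oc adm node-logs --role master -u kubelet
oc adm must-gather --dest-dir=/local/directory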
2.8.1.44. oc adm verify-image-signature
Verify the image identity contained in the image signature
Example usage
# Verify the image signature and identity using the local GPG keychain
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
--expected-identity=registry.local:5000/foo/bar:v1
# Verify the image signature and identity using the local GPG keychain and save the status
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
--expected-identity=registry.local:5000/foo/bar:v1 --save
# Verify the image signature and identity via exposed registry route
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
--expected-identity=registry.local:5000/foo/bar:v1 \
--registry-url=docker-registry.foo.com
# Remove all signature verifications from the image
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all
2.8.2. Additional resources
OpenShift CLI developer command reference
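Each subcommand above also documents its full flag list for the installed client version through the built-in help, for example:
# Print every supported option for a given subcommand
oc adm must-gather --help
oc adm prune images --help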
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhocp-4.12-for-rhel-8-x86_64-rpms\"",
"yum install openshift-clients",
"oc <command>",
"brew install openshift-cli",
"oc login -u user1",
"Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.",
"oc new-project my-project",
"Now using project \"my-project\" on server \"https://openshift.example.com:6443\".",
"oc new-app https://github.com/sclorg/cakephp-ex",
"--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>",
"oc logs cakephp-ex-1-deploy",
"--> Scaling cakephp-ex-1 to 1 --> Success",
"oc project",
"Using project \"my-project\" on server \"https://openshift.example.com:6443\".",
"oc status",
"In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.",
"oc api-resources",
"NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap",
"oc help",
"OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application",
"oc create --help",
"Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]",
"oc explain pods",
"KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources",
"oc logout",
"Logged \"user1\" out on \"https://openshift.example.com\"",
"oc completion bash > oc_bash_completion",
"sudo cp oc_bash_completion /etc/bash_completion.d/",
"cat >>~/.zshrc<<EOF if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF",
"apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k",
"oc status",
"status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example.",
"oc project",
"Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".",
"oc project alice-project",
"Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".",
"oc login -u system:admin -n default",
"oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]",
"oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]",
"oc config use-context <context_nickname>",
"oc config set <property_name> <property_value>",
"oc config unset <property_name>",
"oc config view",
"oc config view --config=<specific_filename>",
"oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0",
"oc config view",
"apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0",
"oc config set-context `oc config current-context` --namespace=<project_name>",
"oc whoami -c",
"#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"",
"chmod +x <plugin_file>",
"sudo mv <plugin_file> /usr/local/bin/.",
"oc plugin list",
"The following compatible plugins are available: /usr/local/bin/<plugin_file>",
"oc ns",
"oc krew search",
"oc krew info <plugin_name>",
"oc krew install <plugin_name>",
"oc krew list",
"oc krew upgrade <plugin_name>",
"oc krew upgrade",
"oc krew uninstall <plugin_name>",
"Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-",
"Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io",
"Print the supported API versions oc api-versions",
"Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' - i.e. expand wildcard characters in file names oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap",
"Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json",
"Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true",
"View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json",
"Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx",
"Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo",
"Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml",
"Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80",
"Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new",
"Print the address of the control plane and cluster services oc cluster-info",
"Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state",
"Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\\.kube\\completion.ps1 Add-Content USDPROFILE \"USDHOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content USDPROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE",
"Display the current-context oc config current-context",
"Delete the minikube cluster oc config delete-cluster minikube",
"Delete the context for the minikube cluster oc config delete-context minikube",
"Delete the minikube user oc config delete-user minikube",
"List the clusters that oc knows about oc config get-clusters",
"List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context",
"List the users that oc knows about oc config get-users",
"Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name",
"Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true",
"Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set proxy url for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4",
"Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin",
"Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-",
"Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace",
"Use the context for the minikube cluster oc config use-context minikube",
"Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'",
"!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar",
"Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json",
"Create a new build oc create build myapp",
"Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10",
"Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"",
"Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1",
"Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env",
"Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date",
"Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701",
"Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx",
"Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones",
"Create a new image stream oc create imagestream mysql",
"Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0",
"Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"",
"Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob",
"Create a new namespace named my-namespace oc create namespace my-namespace",
"Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%",
"Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"",
"Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort",
"Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status",
"Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1",
"Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets",
"Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com",
"Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend",
"If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using: oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json",
"Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env",
"Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key",
"Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"",
"Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com",
"Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080",
"Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080",
"Create a new service account named my-service-account oc create serviceaccount my-service-account",
"Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific uid oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc",
"Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"",
"Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping acme_ldap:adamjones ajones",
"Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns",
"Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' - i.e. expand wildcard characters in file names oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all",
"Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe po -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend",
"Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -",
"Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the deployment/mydeployment's status subresource oc edit deployment mydeployment --subresource='status'",
"Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date",
"Get the documentation of the resource and its fields oc explain pods # Get the documentation of a specific field of a resource oc explain pods.spec.containers",
"Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx",
"Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf",
"List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List status subresource for a single pod. oc get pod web-pod-13je7 --subresource status",
"Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt",
"Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: Wildcard filter is not supported with append. Pass a single os/arch to append oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz",
"Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract. Pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]",
"Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64",
"Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.*",
"Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm",
"Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6",
"Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-",
"Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass",
"Log out oc logout",
"Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container",
"List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml",
"Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . --image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp",
"Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"",
"Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe namespaces -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh",
"Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the scale subresource using a merge patch. oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'",
"List all available plugins oc plugin list",
"Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1",
"Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml",
"Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml",
"Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000",
"Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -",
"Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project",
"List all projects oc projects",
"To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api",
"Display information about the integrated registry oc registry info",
"Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS",
"Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json",
"Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json",
"Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx",
"View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3",
"Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json",
"Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx",
"Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx",
"Resume an already paused deployment oc rollout resume dc/nginx",
"Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend",
"Watch the status of the latest rollout oc rollout status dc/nginx",
"Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3",
"Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/sheduled",
"Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir",
"Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>",
"Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/foo rc/bar rc/baz # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web",
"Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount",
"Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name",
"Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"",
"Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret",
"Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir",
"Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh",
"Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp",
"Set a deployment configs's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment configs's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in yaml format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml",
"Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all",
"Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30",
"Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml",
"Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero",
"Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -",
"Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml",
"Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml",
"Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main",
"List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (pvc) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string>",
"Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait",
"See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest",
"Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d",
"Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context oc version --short # Print the OpenShift client version information for the current context oc version --client",
"Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity): oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\". oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s",
"Display the currently authenticated user oc whoami",
"Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all",
"Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageContentSourcePolicy.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageContentSourcePolicies generated by oc adm catalog mirror oc delete imagecontentsourcepolicy -l operators.openshift.org/catalog=true",
"Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp",
"Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp",
"Mark node \"foo\" as unschedulable oc adm cordon foo",
"Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml",
"Output a template for the error page to stdout oc adm create-error-template",
"Output a template for the login page to stdout oc adm create-login-template",
"Output a template for the provider selection page to stdout oc adm create-provider-selection-template",
"Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900",
"Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2",
"Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name",
"Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm groups prune --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm groups prune --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2",
"Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in a whitelist file with an LDAP server oc adm groups sync --whitelist=/path/to/whitelist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm",
"Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions",
"Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm",
"Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod-dir oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh",
"Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'",
"Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/logs oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron",
"Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'",
"Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'",
"Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'",
"Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1",
"Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2",
"Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1",
"Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml",
"Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml",
"Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm",
"Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm",
"Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure http protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm",
"Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported. Pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.2.2 --filter-by-os=linux/s390x",
"Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.2.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.2.0 4.2.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.2.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported. Pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.2.2 --filter-by-os=linux/s390x",
"Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.3.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.3.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.3.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.3.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature",
"Create a release from the latest origin images and push to a DockerHub repo oc adm release new --from-image-stream=4.1 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1 --name 4.1.1 --previous 4.1.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1",
"Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label mylabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule",
"Show usage statistics for images oc adm top images",
"Show usage statistics for image streams oc adm top imagestreams",
"Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME",
"Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel",
"Mark node \"foo\" as schedulable oc adm uncordon foo",
"Review the available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true",
"Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/cli_tools/openshift-cli-oc |
Chapter 11. SelfSubjectAccessReview [authorization.k8s.io/v1] | Chapter 11. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or not the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 11.1.1. .spec Description SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface 11.1.2. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 11.1.3. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 11.1.4. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. 
True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue to determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 11.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectaccessreviews POST : create a SelfSubjectAccessReview 11.2.1. /apis/authorization.k8s.io/v1/selfsubjectaccessreviews Table 11.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SelfSubjectAccessReview Table 11.2. Body parameters Parameter Type Description body SelfSubjectAccessReview schema Table 11.3. HTTP responses HTTP code Response body 200 - OK SelfSubjectAccessReview schema 201 - Created SelfSubjectAccessReview schema 202 - Accepted SelfSubjectAccessReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authorization_apis/selfsubjectaccessreview-authorization-k8s-io-v1
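A concrete request body makes the schema above easier to read. The following is a minimal sketch, not a mandated form; the group, resource, verb, and namespace values are hypothetical placeholders:

apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    group: apps
    resource: deployments
    verb: create
    namespace: default

Submitted with, for example, oc create -f ssar.yaml -o yaml, the server returns the same object with status.allowed (and possibly status.reason or status.evaluationError) filled in by the authorizer. For quick interactive checks, oc auth can-i create deployments -n default asks the same question without writing the review object by hand.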
Chapter 3. Optimizing Virtualization Performance with virt-manager | Chapter 3. Optimizing Virtualization Performance with virt-manager This chapter covers performance tuning options available in virt-manager , a desktop tool for managing guest virtual machines. 3.1. Operating System Details and Devices 3.1.1. Specifying Guest Virtual Machine Details The virt-manager tool provides different profiles depending on what operating system type and version are selected for a new guest virtual machine. When creating a guest, you should provide as many details as possible; this can improve performance by enabling features available for your specific type of guest. See the following example screen capture of the virt-manager tool. When creating a new guest virtual machine, always specify your intended OS type and Version : Figure 3.1. Provide the OS type and Version 3.1.2. Remove Unused Devices Removing unused or unnecessary devices can improve performance. For instance, a guest tasked as a web server is unlikely to require audio features or an attached tablet. See the following example screen capture of the virt-manager tool. Click the Remove button to remove unnecessary devices: Figure 3.2. Remove unused devices | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-virt_manager |
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Google cloud | Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Google cloud 4.1. Replacing operational or failed storage devices on Google Cloud installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an Google Cloud installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Google Cloud installer-provisioned infrastructure Replacing failed nodes on Google Cloud installer-provisioned infrastructures . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_google_cloud |
Chapter 7. Dynamic provisioning | Chapter 7. Dynamic provisioning 7.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The Red Hat OpenShift Service on AWS persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in Red Hat OpenShift Service on AWS. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 7.2. Available dynamic provisioning plugins Red Hat OpenShift Service on AWS provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Amazon Elastic Block Store (Amazon EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 7.3. Defining a storage class StorageClass objects are currently globally scoped objects and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator installs a default storage class. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 7.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner; these vary from plugin to plugin. 7.3.2. 
Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 7.3.3. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp3 , sc1 , st1 . The default is gp3 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.4. Changing the default storage class Use the following procedure to change the default storage class. For example, suppose you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: $ oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: $ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. 
With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: $ oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: $ oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs | [
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6",
"oc get storageclass",
"NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc get storageclass",
"NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/storage/dynamic-provisioning |
Chapter 8. sVirt | Chapter 8. sVirt sVirt is a technology included in Red Hat Enterprise Linux that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtual machines. The main reasons for integrating these technologies are to improve security and harden the system against bugs in the hypervisor that might be used as an attack vector aimed at the host or at another virtual machine. This chapter describes how sVirt integrates with virtualization technologies in Red Hat Enterprise Linux. Non-Virtualized Environment In a non-virtualized environment, hosts are separated from each other physically and each host has a self-contained environment, consisting of services such as a Web server, or a DNS server. These services communicate directly with their own user space, host kernel and physical host, offering their services directly to the network. The following image represents a non-virtualized environment: Virtualized Environment In a virtualized environment, several operating systems can be housed (as "guests") within a single host kernel and physical host. The following image represents a virtualized environment: 8.1. Security and Virtualization When services are not virtualized, machines are physically separated. Any exploit is usually contained to the affected machine, with the obvious exception of network attacks. When services are grouped together in a virtualized environment, extra vulnerabilities emerge in the system. If there is a security flaw in the hypervisor that can be exploited by a guest instance, this guest may be able to not only attack the host, but also other guests running on that host. This is not theoretical; attacks already exist on hypervisors. These attacks can extend beyond the guest instance and could expose other guests to attack. sVirt is an effort to isolate guests and limit their ability to launch further attacks if exploited. This is demonstrated in the following image, where an attack cannot break out of the virtual machine and extend to another host instance: SELinux introduces a pluggable security framework for virtualized instances in its implementation of Mandatory Access Control (MAC). The sVirt framework allows guests and their resources to be uniquely labeled. Once labeled, rules can be applied which can reject access between different guests. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-svirt
13.2.10. SSSD and Identity Providers (Domains) | 13.2.10. SSSD and Identity Providers (Domains) SSSD recognizes domains , which are entries within the SSSD configuration file associated with different, external data sources. Domains are a combination of an identity provider (for user information) and, optionally, other providers such as authentication (for authentication requests) and for other operations, such as password changes. (The identity provider can also be used for all operations, if all operations are performed within a single domain or server.) SSSD works with different LDAP identity providers (including OpenLDAP, Red Hat Directory Server, and Microsoft Active Directory) and can use native LDAP authentication, Kerberos authentication, or provider-specific authentication protocols (such as Active Directory). A domain configuration defines the identity provider , the authentication provider , and any specific configuration to access the information in those providers. There are several types of identity and authentication providers: LDAP, for general LDAP servers Active Directory (an extension of the LDAP provider type) Identity Management (an extension of the LDAP provider type) Local, for the local SSSD database Proxy Kerberos (authentication provider only) The identity and authentication providers can be configured in different combinations in the domain entry. The possible combinations are listed in Table 13.6, "Identity Store and Authentication Type Combinations" . Table 13.6. Identity Store and Authentication Type Combinations Identification Provider Authentication Provider Identity Management (LDAP) Identity Management (LDAP) Active Directory (LDAP) Active Directory (LDAP) Active Directory (LDAP) Kerberos LDAP LDAP LDAP Kerberos proxy LDAP proxy Kerberos proxy proxy Along with the domain entry itself, the domain name must be added to the list of domains that SSSD will query. For example: Global attributes are available to any type of domain, such as cache and timeout settings. Each identity and authentication provider has its own set of required and optional configuration parameters. Table 13.7. General [domain] Configuration Parameters Parameter Value Format Description id_provider string Specifies the data back end to use for this domain. The supported identity back ends are: ldap ipa (Identity Management in Red Hat Enterprise Linux) ad (Microsoft Active Directory) proxy, for a legacy NSS provider, such as nss_nis . Using a proxy ID provider also requires specifying the legacy NSS library to load to start successfully, set in the proxy_lib_name option. local, the SSSD internal local provider auth_provider string Sets the authentication provider used for the domain. The default value for this option is the value of id_provider . The supported authentication providers are ldap, ipa, ad, krb5 (Kerberos), proxy, and none. min_id,max_id integer Optional. Specifies the UID and GID range for the domain. If a domain contains entries that are outside that range, they are ignored. The default value for min_id is 1 ; the default value for max_id is 0 , which is unlimited. Important The default min_id value is the same for all types of identity provider. If LDAP directories are using UID numbers that start at one, it could cause conflicts with users in the local /etc/passwd file. To avoid these conflicts, set min_id to 1000 or higher if possible. cache_credentials Boolean Optional. Specifies whether to store user credentials in the local SSSD domain database cache.
The default value for this parameter is false . Set this value to true for domains other than the LOCAL domain to enable offline authentication. entry_cache_timeout integer Optional. Specifies how long, in seconds, SSSD should cache positive cache hits. A positive cache hit is a successful query. use_fully_qualified_names Boolean Optional. Specifies whether requests to this domain require fully qualified domain names. If set to true , all requests to this domain must use fully qualified domain names. It also means that the output from the request displays the fully-qualified name. Restricting requests to fully qualified user names allows SSSD to differentiate between domains with users with conflicting user names. If use_fully_qualified_names is set to false , it is possible to use the fully-qualified name in the requests, but only the simplified version is displayed in the output. SSSD can only parse names based on the domain name, not the realm name. The same name can be used for both domains and realms, however. | [
"[sssd] domains = LOCAL, Name [domain/ Name ] id_provider = type auth_provider = type provider_specific = value global = value"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/configuring_domains |
Chapter 2. Red Hat Quay prerequisites | Chapter 2. Red Hat Quay prerequisites Before deploying Red Hat Quay, you must provision image storage, a database, and Redis. 2.1. Image storage backend Red Hat Quay stores all binary blobs in its storage backend. Local storage Red Hat Quay can work with local storage; however, this should only be used for proof of concept or test setups, as the durability of the binary blobs cannot be guaranteed. HA storage setup For a Red Hat Quay HA deployment, you must provide HA image storage, for example: Red Hat OpenShift Data Foundation , previously known as Red Hat OpenShift Container Storage, is software-defined storage for containers. Engineered as the data and storage services platform for OpenShift Container Platform, Red Hat OpenShift Data Foundation helps teams develop and deploy applications quickly and efficiently across clouds. More information can be found at https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation . Ceph Object Gateway (also called RADOS Gateway) is an example of a storage solution that can provide the object storage needed by Red Hat Quay. Detailed instructions on how to use Ceph storage as a highly available storage backend can be found in the Quay High Availability Guide . Further information about Red Hat Ceph Storage and HA setups can be found in the Red Hat Ceph Storage Architecture Guide . Geo-replication Local storage cannot be used for geo-replication, so a supported on premise or cloud based object storage solution must be deployed. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Red Hat Quay instance, and will then be replicated, in the background, to the other storage engines. This requires the image storage to be accessible from all regions. 2.1.1. Supported image storage engines Red Hat Quay supports the following on premise storage types: Ceph/Rados RGW OpenStack Swift Red Hat OpenShift Data Foundation 4 (through NooBaa) Red Hat Quay supports the following public cloud storage engines: Amazon Web Services (AWS) S3 Google Cloud Storage Azure Blob Storage Hitachi Content Platform (HCP) 2.2. Database backend Red Hat Quay stores all of its configuration information in the config.yaml file. Registry metadata, for example, user information, robot accounts, teams, permissions, organizations, images, tags, manifests, etc., are stored inside the database backend. Logs can be pushed to Elasticsearch if required. PostgreSQL is the preferred database backend because it can be used for both Red Hat Quay and Clair. A future version of Red Hat Quay will remove support for using MySQL and MariaDB as the database backend, which has been deprecated since the Red Hat Quay 3.6 release. Until then, MySQL is still supported according to the support matrix , but will not receive additional features or explicit testing coverage. The Red Hat Quay Operator supports only PostgreSQL deployments when the database is managed. If you want to use MySQL, you must deploy it manually and set the database component to managed: false . Deploying Red Hat Quay in a highly available (HA) configuration requires that your database services are provisioned for high availability. If Red Hat Quay is running on public cloud infrastructure, it is recommended that you use the PostgreSQL services provided by your cloud provider; however, MySQL is also supported.
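For reference, the database connection is defined by the DB_URI field in the config.yaml file. A minimal sketch, assuming a PostgreSQL server at a hypothetical host with hypothetical credentials: DB_URI: postgresql://quayuser:quaypass@postgres.example.com:5432/quay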
Geo-replication requires a single, shared database that is accessible from all regions. 2.3. Redis Red Hat Quay stores builder logs inside a Redis cache. Because the data stored is ephemeral, Redis does not need to be highly available even though it is stateful. If Redis fails, you will lose access to build logs, builders, and the garbage collector service. Additionally, user events will be unavailable. You can use a Redis image from the Red Hat Software Collections or from any other source you prefer. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_architecture/arch-prereqs |
18.9. Custom Cache Stores | 18.9. Custom Cache Stores Custom cache stores are a customized implementation of Red Hat JBoss Data Grid cache stores. In order to create a custom cache store (or loader), implement all or a subset of the following interfaces based on the need: CacheLoader CacheWriter AdvancedCacheLoader AdvancedCacheWriter ExternalStore AdvancedLoadWriteStore See Section 17.1, "Cache Loaders and Cache Writers" for individual functions of the interfaces. Note If the AdvancedCacheWriter is not implemented, the expired entries cannot be purged or cleared using the given writer. Note If the AdvancedCacheLoader is not implemented, the entries stored in the given loader will not be used for preloading and map/reduce iterations. To migrate the existing cache store to the new API or to write a new store implementation, use SingleFileStore as an example. To view the SingleFileStore example code, download the JBoss Data Grid source code. Use the following procedure to download SingleFileStore example code from the Customer Portal: Procedure 18.10. Download JBoss Data Grid Source Code To access the Red Hat Customer Portal, navigate to https://access.redhat.com/home in a browser. Click Downloads . In the section labeled JBoss Development and Management , click Red Hat JBoss Data Grid . Enter the relevant credentials in the Red Hat Login and Password fields and click Log In . From the list of downloadable files, locate Red Hat JBoss Data Grid USD{VERSION} Source Code and click Download . Save and unpack it in a desired location. Locate the SingleFileStore source code by navigating through jboss-datagrid-6.6.1-sources/infinispan-6.4.1.Final-redhat-1-src/core/src/main/java/org/infinispan/persistence/file/SingleFileStore.java . Report a bug 18.9.1. Custom Cache Store Maven Archetype An easy way to get started with developing a Custom Cache Store is to use the Maven archetype; creating an archetype will generate a new Maven project with the correct directory layout and sample code. Procedure 18.11. Generate a Maven Archetype Ensure the JBoss Data Grid Maven repository has been installed by following the instructions in the Red Hat JBoss Data Grid Getting Started Guide . Open a command prompt and execute the following command to generate an archetype in the current directory: Note The above command has been broken into multiple lines for readability; however, when executed this command and all arguments must be on a single line. Report a bug 18.9.2. Custom Cache Store Configuration (Remote Client-Server Mode) The following is a sample configuration for a custom cache store in Red Hat JBoss Data Grid's Remote Client-Server mode: Example 18.2. Custom Cache Store Configuration For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . Report a bug 18.9.2.1. Option 1: Add Custom Cache Store using deployments (Remote Client-Server Mode) Procedure 18.12. Deploy Custom Cache Store .jar file to JDG server using deployments Add the following Java service loader file META-INF/services/org.infinispan.persistence.spi.AdvancedLoadWriteStore to the module and add a reference to the Custom Cache Store Class, such as seen below: Copy the jar to the USDJDG_HOME/standalone/deployments/ directory. 
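For example, assuming the custom store has been packaged as custom-store.jar , the copy step might look like: cp target/custom-store.jar USDJDG_HOME/standalone/deployments/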
If the .jar file is available to the server, the following message will be displayed in the logs: In the infinispan-core subsystem add an entry for the cache inside a cache-container , specifying the class that overrides one of the interfaces from Section 18.9, "Custom Cache Stores" : Report a bug 18.9.2.2. Option 2: Add Custom Cache Store using the CLI (Remote Client-Server Mode) Procedure 18.13. Deploying Custom Cache Store .jar file to JDG server using the CLI Connect to the JDG server by running the following command: Deploy the .jar file by executing the following command: Report a bug 18.9.2.3. Option 3: Add Custom Cache Store using JON (Remote Client-Server Mode) Procedure 18.14. Deploying Custom Cache Store .jar file to JDG server using JBoss Operation Network Log into JON. Navigate to Bundles along the upper bar. Click the New button and choose the Recipe radio button. Insert the deployment bundle file content that references the store, similar to the following example: Proceed with the Next button to the Bundle Groups configuration wizard page and proceed with the Next button once again. Locate the custom cache store .jar file using the file uploader and Upload the file. Proceed with the Next button to the Summary configuration wizard page. Proceed with the Finish button to finish the bundle configuration. Navigate back to the Bundles tab along the upper bar. Select the newly created bundle and click the Deploy button. Enter the Destination Name and choose the proper Resource Group; this group should only consist of JDG servers. Choose Install Directory from the Base Location 's radio box group. Enter /standalone/deployments in the Deployment Directory text field below. Proceed with the wizard using the default options. Validate the deployment using the following command on the server's host: Confirm the bundle has been installed in USDJDG_HOME/standalone/deployments . Once the above steps are completed, the .jar file will be successfully uploaded and registered by the JDG server. Report a bug 18.9.3. Custom Cache Store Configuration (Library Mode) The following is a sample configuration for a custom cache store in Red Hat JBoss Data Grid's Library mode: Example 18.3. Custom Cache Store Configuration For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)" . Note The Custom Cache Store classes must be in the classpath where Red Hat JBoss Data Grid is used. Most often this is accomplished by packaging the Custom Cache Store with the application; however, it may also be accomplished by defining the Custom Cache Store as a module in EAP and listing it as a dependency, as discussed in the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide . Report a bug | [
"mvn -Dmaven.repo.local=\"path/to/unzipped/jboss-datagrid-6.6.0-maven-repository/\" archetype:generate -DarchetypeGroupId=org.infinispan -DarchetypeArtifactId=custom-cache-store-archetype -DarchetypeVersion=6.4.1.Final-redhat-1",
"<distributed-cache name=\"cacheStore\" mode=\"SYNC\" segments=\"20\" owners=\"2\" remote-timeout=\"30000\"> <store class=\"my.package.CustomCacheStore\"> <property name=\"customStoreProperty\">10</property> </store> </distributed-cache>",
"my.package.CustomCacheStore",
"JBAS010287: Registering Deployed Cache Store service for store 'my.package.CustomCacheStore'",
"<subsystem xmlns=\"urn:infinispan:server:core:6.2\"> [...] <distributed-cache name=\"cacheStore\" mode=\"SYNC\" segments=\"20\" owners=\"2\" remote-timeout=\"30000\"\"> <store class=\"my.package.CustomCacheStore\"> <!-- If custom properties are included these may be specified as below --> <property name=\"customStoreProperty\">10</property> </store> </distributed-cache> [...] </subsystem>",
"[USDJDG_HOME] USD bin/cli.sh --connect=USDIP:USDPORT",
"deploy /path/to/artifact.jar",
"<?xml version=\"1.0\"?> <project name=\"cc-bundle\" default=\"main\" xmlns:rhq=\"antlib:org.rhq.bundle\"> <rhq:bundle name=\"Mongo DB Custom Cache Store\" version=\"1.0\" description=\"Custom Cache Store\"> <rhq:deployment-unit name=\"JDG\" compliance=\"full\"> <rhq:file name=\"custom-store.jar\"/> </rhq:deployment-unit> </rhq:bundle> <target name=\"main\" /> </project>",
"find USDJDG_HOME -name \"custom-store.jar\"",
"<persistence> <store class=\"org.infinispan.custom.CustomCacheStore\" preload=\"true\" shared=\"true\"> <properties> <property name=\"customStoreProperty\" value=\"10\" /> </properties> </store> </persistence>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-Custom_Cache_Stores |
Chapter 9. Detecting Dead Connections | Chapter 9. Detecting Dead Connections Sometimes clients stop unexpectedly and do not have a chance to clean up their resources. If this occurs, it can leave resources in a faulty state and result in the broker running out of memory or other system resources. The broker detects that a client's connection was not properly shut down at garbage collection time. The connection is then closed and a message similar to the one below is written to the log. The log captures the exact line of code where the client session was instantiated. This enables you to identify the error and correct it. 1 The line in the client code where the connection was instantiated. 9.1. Connection Time-To-Live Because the network connection between the client and the server can fail and then come back online, allowing a client to reconnect, AMQ Broker waits to clean up inactive server-side resources. This wait period is called a time-to-live (TTL). The default TTL for a network-based connection is 60000 milliseconds (1 minute). The default TTL on an in-VM connection is -1 , which means the broker never times out the connection on the broker side. Configuring Time-To-Live on the Broker If you do not want clients to specify their own connection TTL, you can set a global value on the broker side. This can be done by specifying the connection-ttl-override element in the broker configuration. The logic to check connections for TTL violations runs periodically on the broker, as determined by the connection-ttl-check-interval element. Procedure Edit <broker_instance_dir> /etc/broker.xml by adding the connection-ttl-override configuration element and providing a value for the time-to-live, as in the example below. <configuration> <core> ... <connection-ttl-override>30000</connection-ttl-override> 1 <connection-ttl-check-interval>1000</connection-ttl-check-interval> 2 ... </core> </configuration> 1 The global TTL for all connections is set to 30000 milliseconds. The default value is -1 , which allows clients to set their own TTL. 2 The interval between checks for dead connections is set to 1000 milliseconds. By default, the checks are done every 2000 milliseconds. 9.2. Disabling Asynchronous Connection Execution Most packets received on the broker side are executed on the remoting thread. These packets represent short-running operations and are always executed on the remoting thread for performance reasons. However, some packet types are executed using a thread pool instead of the remoting thread, which adds a little network latency. The packet types that use the thread pool are implemented within the Java classes listed below. The classes are all found in the package org.apache.activemq.artemis.core.protocol.core.impl.wireformat . RollbackMessage SessionCloseMessage SessionCommitMessage SessionXACommitMessage SessionXAPrepareMessage SessionXARollbackMessage Procedure To disable asynchronous connection execution, add the async-connection-execution-enabled configuration element to <broker_instance_dir> /etc/broker.xml and set it to false , as in the example below. The default value is true . <configuration> <core> ... <async-connection-execution-enabled>false</async-connection-execution-enabled> ... </core> </configuration> Additional resources To learn how to configure the AMQ Core Protocol JMS client to detect dead connections, see Detecting dead connections in the AMQ Core Protocol JMS documentation.
To learn how to configure a connection time-to-live in the AMQ Core Protocol JMS client, see Configuring time-to-live in the AMQ Core Protocol JMS documentation. | [
"[Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] I'm closing a JMS Conection you left open. Please make sure you close all connections explicitly before let ting them go out of scope! [Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] The session you didn't close was created here: java.lang.Exception at org.apache.activemq.artemis.core.client.impl.DelegatingSession.<init>(DelegatingSession.java:83) at org.acme.yourproject.YourClass (YourClass.java:666) 1",
"<configuration> <core> <connection-ttl-override>30000</connection-ttl-override> 1 <connection-ttl-check-interval>1000</connection-ttl-check-interval> 2 </core> </configuration>",
"<configuration> <core> <async-connection-execution-enabled>false</async-connection-execution-enabled> </core> </configuration>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/dead_connections |
Chapter 4. Configuring OAuth clients | Chapter 4. Configuring OAuth clients Several OAuth clients are created by default in OpenShift Container Platform. You can also register and configure additional OAuth clients. 4.1. Default OAuth clients The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. openshift-cli-client Requests tokens by using a local HTTP server fetching an authorization code grant. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host 4.2. Registering an additional OAuth client If you need an additional OAuth client to manage authentication for your OpenShift Container Platform cluster, you can register one. Procedure To register additional OAuth clients: USD oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: "..." 2 redirectURIs: - "http://www.example.com/" 3 grantMethod: prompt 4 ') 1 The name of the OAuth client is used as the client_id parameter when making requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token . 2 The secret is used as the client_secret parameter when making requests to <namespace_route>/oauth/token . 3 The redirect_uri parameter specified in requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token must be equal to or prefixed by one of the URIs listed in the redirectURIs parameter value. 4 The grantMethod is used to determine what action to take when this client requests tokens and has not yet been granted access by the user. Specify auto to automatically approve the grant and retry the request, or prompt to prompt the user to approve or deny the grant. 4.3. Configuring token inactivity timeout for an OAuth client You can configure OAuth clients to expire OAuth tokens after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in the internal OAuth server configuration, the timeout that is set in the OAuth client overrides that value. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuthClient configuration to set a token inactivity timeout. Edit the OAuthClient object: USD oc edit oauthclient <oauth_client> 1 1 Replace <oauth_client> with the OAuth client to configure, for example, console . Add the accessTokenInactivityTimeoutSeconds field and set your timeout value: apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: ... accessTokenInactivityTimeoutSeconds: 600 1 1 The minimum allowed timeout value in seconds is 300 . Save the file to apply the changes. Verification Log in to the cluster with an identity from your IDP. Be sure to use the OAuth client that you just configured. Perform an action and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 600 seconds. Try to perform an action from the same identity's session. 
This attempt should fail because the token should have expired due to inactivity longer than the configured timeout. 4.4. Additional resources OAuthClient [oauth.openshift.io/v1 ] | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host",
"oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')",
"oc edit oauthclient <oauth_client> 1",
"apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: accessTokenInactivityTimeoutSeconds: 600 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/configuring-oauth-clients |
Chapter 7. Web Administration Interface Navigation | Chapter 7. Web Administration Interface Navigation 7.1. Web Administration Default Landing Interface After logging in to the Web Administration interface, all the managed and unmanaged clusters are displayed in a rows format with the corresponding cluster attributes. Note To identify the version of Red Hat Gluster Storage installed, see the Cluster Version attribute. Managed clusters are the clusters that are successfully imported by Web Administration for monitoring purposes. Unmanaged clusters are the clusters that are ready to be imported by Web Administration. Cluster Attributes The following are the cluster attributes that are displayed in the cluster row: Cluster name : the name of the cluster Cluster Version : the version of Red Hat Gluster Storage installed Managed : whether the cluster is imported or ready to be imported Hosts : the number of hosts or nodes part of the cluster Volumes : the number of volumes part of the hosts Alerts : the number of alerts generated by the system for different tasks Volume Profiling : whether Enabled, Disabled or Mixed Note Mixed cluster attribute signifies a cluster containing at least one volume with profiling enabled and at least one with profiling disabled. Cluster Status : whether the cluster is ready for use or ready to be imported. The cluster state is one of the following: Ready for Use Ready to be Imported Ready for expansion Tasks in progress Actionable Buttons : the Import button, the monitoring Dashboard button, and an inline menu for the following administrative operations: Enable and disable volume profiling Unmanage Cluster Expand Cluster To identify managed and unmanaged clusters, view the cluster attributes. Unmanaged Cluster : An unmanaged cluster displays the Managed attribute as No . Managed Cluster : A managed cluster displays the Managed attribute as Yes . Accessing Monitoring Dashboard The Clusters tab provides a shortcut button to access the Grafana Monitoring Dashboard. At the right hand side of a cluster row, click on Dashboard and you will be redirected to the Grafana Monitoring dashboard. 7.2. Web Administration Interface Switcher The Web Administration interface provides a menu to select and switch interface views displaying a common clusters interface and a cluster-specific interface. Accessing Interface Switcher At the top left of the default landing page, next to the label Red Hat Gluster Storage Web Administration, a drop-down menu is available. The drop-down menu provides: All Clusters view: the default selection after logging in that displays all managed and unmanaged clusters. Cluster-specific view: option to select a specific managed cluster. To select a cluster-specific interface, click on the drop-down menu and select the specific managed cluster. Note Only managed clusters are available to select in the drop-down menu. The cluster-specific interface is displayed with a left navigation pane for Hosts, Volumes, Tasks, and Events associated with the selected cluster. To switch back to the default landing interface view displaying all clusters, select All Clusters from the drop-down menu. 7.3. Web Administration Cluster-specific Interface Navigation The cluster-specific interface provides a vertical navigation pane available at the left hand side of the interface to conveniently access the different elements of the clusters.
The navigation pane provides access to the following menus: Hosts: hosts view and monitoring dashboard shortcut Volumes: Volumes view and monitoring dashboard shortcut Tasks: view completed and failed system tasks Events: view all the system-wide events 7.3.1. Clusters View and Monitoring Dashboard Shortcut The Clusters tab in the navigation pane lists all the imported clusters in a rows format. Each row shows the individual cluster attributes such as the version of the cluster, whether managed or unmanaged and the status of Volume Profiling whether enabled or disabled. Figure 7.1. Clusters View Accessing Monitoring Dashboard The Clusters tab provides a shortcut button to access the Grafana Monitoring Dashboard. At the right hand side of a cluster row, click on Dashboard and you will be redirected to the Grafana Monitoring dashboard. 7.3.2. Hosts View and Monitoring Dashboard Shortcut The Hosts tab in the navigation pane lists all the accepted hosts assigned to different clusters. The Hosts can be filtered by the Host Name or Status. Figure 7.2. Hosts View Accessing Monitoring Dashboard The Hosts tab provides a shortcut button to access the Grafana Monitoring Dashboard. At the right hand side of a Host row, click on Dashboard , and you will be redirected to the Grafana Monitoring dashboard. 7.3.3. Events View The Events view lists all the events that occurred in the system. To view more details of a specific event, copy the task ID or the job ID, if available in the event listing, to the task ID filter of the Tasks view interface. Figure 7.3. Events View 7.3.4. Tasks View The Web Administration consists of a sizeable number of user-initiated actions to accomplish operations such as importing clusters. It is crucial for Web Administration users to monitor and view the status of the actions they initiated. A user can view the following task information: The status of an initiated task whether completed or failed The details of all past and present cluster-wide initiated actions The timestamp of the initiated task Retrieve a specific task by using the available filters Figure 7.4. Tasks View Tasks can be filtered by Task ID, Task name, the status of the task, and the time interval. Note The Task details will remain in the Web Administration interface for not more than the default Time to live ( TTL ) of 2 days. Once the timespan has elapsed, the task details will be discarded from the system. 7.3.5. Admin and Users The Users tab lists all the users created to access the Web Administration interface. The interface provides user tasks such as adding, editing and deleting a user. For more user administration actions, see the Users and Roles Administration chapter of the Red Hat Gluster Storage Web Administration Monitoring Guide . 7.3.6. Alerts and User Settings To view system-wide notifications and to change the user password, a menubar is available at the top right corner of the interface. To view system-wide alerts, click on the bell icon at the top right menubar of the interface. Changing User Password To change the user password: Click on the user icon from the menu bar. Click My Settings . A dialog window is opened. Enter the new password twice and click Save . Note Email notifications are disabled by default. To enable, check the Email Notifications box. For email notifications configuration instructions, see the SMTP Notifications Configuration section of the Red Hat Gluster Storage Web Administration Monitoring Guide .
Logging out from the interface To log out from the interface: Click on the user icon from the menu bar. Click Logout . | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/quick_start_guide/web_administration_interface_navigation |
Chapter 41. Master | Chapter 41. Master Only consumer is supported The Camel-Master endpoint provides a way to ensure only a single consumer in a cluster consumes from a given endpoint, with automatic failover if that JVM dies. This can be very useful if you need to consume from some legacy back end which either doesn't support concurrent consumption or, for commercial or stability reasons, allows only a single connection at any point in time. 41.1. Using the master endpoint Just prefix any camel endpoint with master:someName: where someName is a logical name and is used to acquire the master lock. e.g. from("master:cheese:jms:foo").to("activemq:wine"); In this example, the master component ensures that the route is only active in one node, at any given time, in the cluster. So if there are 8 nodes in the cluster, then the master component will elect one route to be the leader, and only this route will be active, and hence only this route will consume messages from jms:foo . If this route is stopped or unexpectedly terminated, the master component detects this and elects another node, which then becomes active and starts consuming messages from jms:foo . Note Apache ActiveMQ 5.x has such a feature out of the box, called Exclusive Consumers . 41.2. URI format Where endpoint is any Camel endpoint you want to run in master/slave mode. 41.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 41.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 41.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding urls, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, firstly for the component followed by the endpoint. 41.4. Component Options The Master component supports 4 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean service (advanced) Inject the service to use. CamelClusterService serviceSelector (advanced) Inject the service selector used to lookup the CamelClusterService to use. Selector 41.5. Endpoint Options The Master endpoint is configured using URI syntax: with the following path and query parameters: 41.5.1. Path Parameters (2 parameters) Name Description Default Type namespace (consumer) Required The name of the cluster namespace to use. String delegateUri (consumer) Required The endpoint uri to use in master/slave mode. String 41.5.2. Query Parameters (3 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern 41.6. Example You can protect a clustered Camel application to only consume files from one active node. // the file endpoint we want to consume from String url = "file:target/inbox?delete=true"; // use the camel master component in the clustered group named myGroup // to run a master/slave mode in the following Camel url from("master:myGroup:" + url) .log(name + " - Received file: USD{file:name}") .delay(delay) .log(name + " - Done file: USD{file:name}") .to("file:target/outbox"); The master component leverages CamelClusterService you can configure using Java ZooKeeperClusterService service = new ZooKeeperClusterService(); service.setId("camel-node-1"); service.setNodes("myzk:2181"); service.setBasePath("/camel/cluster"); context.addService(service) Xml (Spring/Blueprint) <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <bean id="cluster" class="org.apache.camel.component.zookeeper.cluster.ZooKeeperClusterService"> <property name="id" value="camel-node-1"/> <property name="basePath" value="/camel/cluster"/> <property name="nodes" value="myzk:2181"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring" autoStartup="false"> ... 
</camelContext> </beans> Spring boot camel.component.zookeeper.cluster.service.enabled = true camel.component.zookeeper.cluster.service.id = camel-node-1 camel.component.zookeeper.cluster.service.base-path = /camel/cluster camel.component.zookeeper.cluster.service.nodes = myzk:2181 41.7. Implementations Camel provides the following ClusterService implementations: camel-consul camel-file camel-infinispan camel-jgroups-raft camel-jgroups camel-kubernetes camel-zookeeper 41.8. Spring Boot Auto-Configuration When using master with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-master-starter</artifactId> </dependency> The component supports 5 options, which are listed below. Name Description Default Type camel.component.master.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.master.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.master.enabled Whether to enable auto configuration of the master component. This is enabled by default. Boolean camel.component.master.service Inject the service to use. The option is a org.apache.camel.cluster.CamelClusterService type. CamelClusterService camel.component.master.service-selector Inject the service selector used to lookup the CamelClusterService to use. The option is a org.apache.camel.cluster.CamelClusterService.Selector type. CamelClusterServiceUSDSelector | [
"from(\"master:cheese:jms:foo\").to(\"activemq:wine\");",
"master:namespace:endpoint[?options]",
"master:namespace:delegateUri",
"// the file endpoint we want to consume from String url = \"file:target/inbox?delete=true\"; // use the camel master component in the clustered group named myGroup // to run a master/slave mode in the following Camel url from(\"master:myGroup:\" + url) .log(name + \" - Received file: USD{file:name}\") .delay(delay) .log(name + \" - Done file: USD{file:name}\") .to(\"file:target/outbox\");",
"ZooKeeperClusterService service = new ZooKeeperClusterService(); service.setId(\"camel-node-1\"); service.setNodes(\"myzk:2181\"); service.setBasePath(\"/camel/cluster\"); context.addService(service)",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <bean id=\"cluster\" class=\"org.apache.camel.component.zookeeper.cluster.ZooKeeperClusterService\"> <property name=\"id\" value=\"camel-node-1\"/> <property name=\"basePath\" value=\"/camel/cluster\"/> <property name=\"nodes\" value=\"myzk:2181\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\" autoStartup=\"false\"> </camelContext> </beans>",
"camel.component.zookeeper.cluster.service.enabled = true camel.component.zookeeper.cluster.service.id = camel-node-1 camel.component.zookeeper.cluster.service.base-path = /camel/cluster camel.component.zookeeper.cluster.service.nodes = myzk:2181",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-master-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-master-component-starter |
Chapter 37. Element Substitution | Chapter 37. Element Substitution Abstract XML Schema substitution groups allow you to define a group of elements that can replace a top level, or head, element. This is useful in cases where you have multiple elements that share a common base type or where elements need to be interchangeable. 37.1. Substitution Groups in XML Schema Overview A substitution group is a feature of XML schema that allows you to specify elements that can replace another element in documents generated from that schema. The replaceable element is called the head element and must be defined in the schema's global scope. The elements of the substitution group must be of the same type as the head element or a type that is derived from the head element's type. In essence, a substitution group allows you to build a collection of elements that can be specified using a generic element. For example, if you are building an ordering system for a company that sells three types of widgets, you might define a generic widget element that contains a set of common data for all three widget types. Then you can define a substitution group that contains a more specific set of data for each type of widget. In your contract you can then specify the generic widget element as a message part instead of defining a specific ordering operation for each type of widget. When the actual message is built, the message can contain any of the elements of the substitution group. Syntax Substitution groups are defined using the substitutionGroup attribute of the XML Schema element element. The value of the substitutionGroup attribute is the name of the element that the element being defined replaces. For example, if your head element is widget , adding the attribute substitutionGroup="widget" to an element named woodWidget specifies that anywhere a widget element is used, you can substitute a woodWidget element. This is shown in Example 37.1, "Using a Substitution Group" . Example 37.1. Using a Substitution Group Type restrictions The elements of a substitution group must be of the same type as the head element or of a type derived from the head element's type. For example, if the head element is of type xsd:int , all members of the substitution group must be of type xsd:int or of a type derived from xsd:int . You can also define a substitution group similar to the one shown in Example 37.2, "Substitution Group with Complex Types" where the elements of the substitution group are of types derived from the head element's type. Example 37.2. Substitution Group with Complex Types The head element of the substitution group, widget , is defined as being of type widgetType . Each element of the substitution group extends widgetType to include data that is specific to ordering that type of widget. Based on the schema in Example 37.2, "Substitution Group with Complex Types" , the part elements in Example 37.3, "XML Document using a Substitution Group" are valid. Example 37.3. XML Document using a Substitution Group Abstract head elements You can define an abstract head element that can never appear in a document produced using your schema. Abstract head elements are similar to abstract classes in Java because they are used as the basis for defining more specific implementations of a generic class. Abstract heads also prevent the use of the generic element in the final product.
You declare an abstract head element by setting the abstract attribute of an element element to true , as shown in Example 37.4, "Abstract Head Definition" . Using this schema, a valid review element can contain either a positiveComment element or a negativeComment element, but cannot contain a comment element. Example 37.4. Abstract Head Definition 37.2. Substitution Groups in Java Overview Apache CXF, as specified in the JAXB specification, supports substitution groups using Java's native class hierarchy in combination with the JAXBElement class's support for wildcard definitions. Because the members of a substitution group must all share a common base type, the classes generated to support the elements' types also share a common base type. In addition, Apache CXF maps instances of the head element to JAXBElement<? extends T> properties. Generated object factory methods The object factory generated to support a package containing a substitution group has methods for each of the elements in the substitution group. For each of the members of the substitution group, except for the head element, the @XmlElementDecl annotation decorating the object factory method includes two additional properties, as described in Table 37.1, "Properties for Declaring a JAXB Element is a Member of a Substitution Group" . Table 37.1. Properties for Declaring a JAXB Element is a Member of a Substitution Group Property Description substitutionHeadNamespace Specifies the namespace where the head element is defined. substitutionHeadName Specifies the value of the head element's name attribute. The object factory method for the head element of the substitution group's @XmlElementDecl contains only the default namespace property and the default name property. In addition to the element instantiation methods, the object factory contains a method for instantiating an object representing the head element. If the members of the substitution group are all of complex types, the object factory also contains methods for instantiating instances of each complex type used. Example 37.5, "Object Factory Method for a Substitution Group" shows the object factory method for the substitution group defined in Example 37.2, "Substitution Group with Complex Types" . Example 37.5. Object Factory Method for a Substitution Group Substitution groups in interfaces If the head element of a substitution group is used as a message part in one of an operation's messages, the resulting method parameter will be an object of the class generated to support that element. It will not necessarily be an instance of the JAXBElement<? extends T> class. The runtime relies on Java's native type hierarchy to support the type substitution, and Java will catch any attempts to use unsupported types. To ensure that the runtime knows all of the classes needed to support the element substitution, the SEI is decorated with the @XmlSeeAlso annotation. This annotation specifies a list of classes required by the runtime for marshalling. For more information on using the @XmlSeeAlso annotation, see Section 32.4, "Adding Classes to the Runtime Marshaller" . Example 37.7, "Generated Interface Using a Substitution Group" shows the SEI generated for the interface shown in Example 37.6, "WSDL Interface Using a Substitution Group" . The interface uses the substitution group defined in Example 37.2, "Substitution Group with Complex Types" . Example 37.6. WSDL Interface Using a Substitution Group Example 37.7.
Generated Interface Using a Substitution Group The SEI shown in Example 37.7, "Generated Interface Using a Substitution Group" lists the object factory in the @XmlSeeAlso annotation. Listing the object factory for a namespace provides access to all of the generated classes for that namespace. Substitution groups in complex types When the head element of a substitution group is used as an element in a complex type, the code generator maps the element to a JAXBElement<? extends T> property. It does not map it to a property containing an instance of the class generated to support the substitution group. For example, the complex type defined in Example 37.8, "Complex Type Using a Substitution Group" results in the Java class shown in Example 37.9, "Java Class for a Complex Type Using a Substitution Group" . The complex type uses the substitution group defined in Example 37.2, "Substitution Group with Complex Types" . Example 37.8. Complex Type Using a Substitution Group Example 37.9. Java Class for a Complex Type Using a Substitution Group Setting a substitution group property How you work with a substitution group depends on whether the code generator mapped the group to a straight Java class or to a JAXBElement<? extends T> class. When the element is simply mapped to an object of the generated value class, you work with the object the same way you work with other Java objects that are part of a type hierarchy. You can substitute any of the subclasses for the parent class. You can inspect the object to determine its exact class, and cast it appropriately. The JAXB specification recommends that you use the object factory methods for instantiating objects of the generated classes. When the code generators create a JAXBElement<? extends T> object to hold instances of a substitution group, you must wrap the element's value in a JAXBElement<? extends T> object. The best method to do this is to use the element creation methods provided by the object factory. They provide an easy means for creating an element based on its value. Example 37.10, "Setting a Member of a Substitution Group" shows code for setting an instance of a substitution group. Example 37.10. Setting a Member of a Substitution Group The code in Example 37.10, "Setting a Member of a Substitution Group" does the following: Instantiates an object factory. Instantiates a PlasticWidgetType object. Instantiates a JAXBElement<PlasticWidgetType> object to hold a plastic widget element. Instantiates a WidgetOrderInfo object. Sets the WidgetOrderInfo object's widget to the JAXBElement object holding the plastic widget element. Getting the value of a substitution group property The object factory methods do not help when extracting the element's value from a JAXBElement<? extends T> object. You must use the JAXBElement<? extends T> object's getValue() method. You can determine the type of object returned by the getValue() method in the following ways: Use the isInstance() method of all the possible classes to determine the class of the element's value object. Use the JAXBElement<? extends T> object's getName() method to determine the element's name. The getName() method returns a QName. Using the local name of the element, you can determine the proper class for the value object. Use the JAXBElement<? extends T> object's getDeclaredType() method to determine the class of the value object. The getDeclaredType() method returns the Class object of the element's value object.
Warning There is a possibility that the getDeclaredType() method will return the base class for the head element regardless of the actual class of the value object. Example 37.11, "Getting the Value of a Member of the Substitution Group" shows code retrieving the value from a substitution group. To determine the proper class of the element's value object the example uses the element's getName() method. Example 37.11. Getting the Value of a Member of the Substitution Group 37.3. Widget Vendor Example 37.3.1. Widget Ordering Interface This section shows an example of substitution groups being used in Apache CXF to solve a real world application. A service and consumer are developed using the widget substitution group defined in Example 37.2, "Substitution Group with Complex Types" . The service offers two operations: checkWidgets and placeWidgetOrder . Example 37.12, "Widget Ordering Interface" shows the interface for the ordering service. Example 37.12. Widget Ordering Interface Example 37.13, "Widget Ordering SEI" shows the generated Java SEI for the interface. Example 37.13. Widget Ordering SEI Note Because the example only demonstrates the use of substitution groups, some of the business logic is not shown. 37.3.2. The checkWidgets Operation Overview checkWidgets is a simple operation that has a parameter that is the head member of a substitution group. This operation demonstrates how to deal with individual parameters that are members of a substitution group. The consumer must ensure that the parameter is a valid member of the substitution group. The service must properly determine which member of the substitution group was sent in the request. Consumer implementation The generated method signature uses the Java class supporting the type of the substitution group's head element. Because the member elements of a substitution group are either of the same type as the head element or of a type derived from the head element's type, the Java classes generated to support the members of the substitution group inherit from the Java class generated to support the head element. Java's type hierarchy natively supports using subclasses in place of the parent class. Because of how Apache CXF generates the types for a substitution group and Java's type hierarchy, the client can invoke checkWidgets() without using any special code. When developing the logic to invoke checkWidgets() you can pass in an object of one of the classes generated to support the widget substitution group. Example 37.14, "Consumer Invoking checkWidgets() " shows a consumer invoking checkWidgets() . Example 37.14. Consumer Invoking checkWidgets() Service implementation The service's implementation of checkWidgets() gets a widget description as a WidgetType object, checks the inventory of widgets, and returns the number of widgets in stock. Because all of the classes used to implement the substitution group inherit from the same base class, you can implement checkWidgets() without using any JAXB specific APIs. All of the classes generated to support the members of the substitution group for widget extend the WidgetType class. Because of this fact, you can use instanceof to determine what type of widget was passed in and simply cast the widgetPart object into the more restrictive type if appropriate. Once you have the proper type of object, you can check the inventory of the right kind of widget. Example 37.15, "Service Implementation of checkWidgets() " shows a possible implementation. Example 37.15. 
Service Implementation of checkWidgets() 37.3.3. The placeWidgetOrder Operation Overview placeWidgetOrder uses two complex types containing the substitution group. This operation demonstrates how to use such a structure in a Java implementation. Both the consumer and the service must get and set members of a substitution group. Consumer implementation To invoke placeWidgetOrder() the consumer must construct a widget order containing one element of the widget substitution group. When adding the widget to the order, the consumer should use the object factory methods generated for each element of the substitution group. This ensures that the runtime and the service can correctly process the order. For example, if an order is being placed for a plastic widget, the ObjectFactory.createPlasticWidget() method is used to create the element before adding it to the order. Example 37.16, "Setting a Substitution Group Member" shows consumer code for setting the widget property of the WidgetOrderInfo object. Example 37.16. Setting a Substitution Group Member Service implementation The placeWidgetOrder() method receives an order in the form of a WidgetOrderInfo object, processes the order, and returns a bill to the consumer in the form of a WidgetOrderBillInfo object. The orders can be for a plain widget, a plastic widget, or a wooden widget. The type of widget ordered is determined by what type of object is stored in the widgetOrderForm object's widget property. The widget property is a substitution group and can contain a widget element, a woodWidget element, or a plasticWidget element. The implementation must determine which of the possible elements is stored in the order. This can be accomplished using the JAXBElement<? extends T> object's getName() method to determine the element's QName. The QName can then be used to determine which element in the substitution group is in the order. Once the element included in the order is known, you can extract its value into the proper type of object. Example 37.17, "Implementation of placeWidgetOrder() " shows a possible implementation. Example 37.17. Implementation of placeWidgetOrder() The code in Example 37.17, "Implementation of placeWidgetOrder() " does the following: Instantiates an object factory to create elements. Instantiates a WidgetOrderBillInfo object to hold the bill. Gets the number of widgets ordered. Gets the local name of the element stored in the order. Checks to see if the element is a woodWidget element. Extracts the value of the element from the order to the proper type of object. Creates a JAXBElement<T> object to be placed into the bill. Sets the bill object's widget property. Sets the bill object's amountDue property. | [
"<element name=\"widget\" type=\"xsd:string\" /> <element name=\"woodWidget\" type=\"xsd:string\" substitutionGroup=\"widget\" />",
"<complexType name=\"widgetType\"> <sequence> <element name=\"shape\" type=\"xsd:string\" /> <element name=\"color\" type=\"xsd:string\" /> </sequence> </complexType> <complexType name=\"woodWidgetType\"> <complexContent> <extension base=\"widgetType\"> <sequence> <element name=\"woodType\" type=\"xsd:string\" /> </sequence> </extension> </complexContent> </complexType> <complexType name=\"plasticWidgetType\"> <complexContent> <extension base=\"widgetType\"> <sequence> <element name=\"moldProcess\" type=\"xsd:string\" /> </sequence> </extension> </complexContent> </complexType> <element name=\"widget\" type=\"widgetType\" /> <element name=\"woodWidget\" type=\"woodWidgetType\" substitutionGroup=\"widget\" /> <element name=\"plasticWidget\" type=\"plasticWidgetType\" substitutionGroup=\"widget\" /> <complexType name=\"partType\"> <sequence> <element ref=\"widget\" /> </sequence> </complexType> <element name=\"part\" type=\"partType\" />",
"<part> <widget> <shape>round</shape> <color>blue</color> </widget> </part> <part> <plasticWidget> <shape>round</shape> <color>blue</color> <moldProcess>sandCast</moldProcess> </plasticWidget> </part> <part> <woodWidget> <shape>round</shape> <color>blue</color> <woodType>elm</woodType> </woodWidget> </part>",
"<element name=\"comment\" type=\"xsd:string\" abstract=\"true\" /> <element name=\"positiveComment\" type=\"xsd:string\" substitutionGroup=\"comment\" /> <element name=\"negtiveComment\" type=\"xsd:string\" substitutionGroup=\"comment\" /> <element name=\"review\"> <complexContent> <all> <element name=\"custName\" type=\"xsd:string\" /> <element name=\"impression\" ref=\"comment\" /> </all> </complexContent> </element>",
"public class ObjectFactory { private final static QName _Widget_QNAME = new QName(...); private final static QName _PlasticWidget_QNAME = new QName(...); private final static QName _WoodWidget_QNAME = new QName(...); public ObjectFactory() { } public WidgetType createWidgetType() { return new WidgetType(); } public PlasticWidgetType createPlasticWidgetType() { return new PlasticWidgetType(); } public WoodWidgetType createWoodWidgetType() { return new WoodWidgetType(); } @XmlElementDecl(namespace=\"...\", name = \"widget\") public JAXBElement<WidgetType> createWidget(WidgetType value) { return new JAXBElement<WidgetType>(_Widget_QNAME, WidgetType.class, null, value); } @XmlElementDecl(namespace = \"...\", name = \"plasticWidget\", substitutionHeadNamespace = \"...\", substitutionHeadName = \"widget\") public JAXBElement<PlasticWidgetType> createPlasticWidget(PlasticWidgetType value) { return new JAXBElement<PlasticWidgetType>(_PlasticWidget_QNAME, PlasticWidgetType.class, null, value); } @XmlElementDecl(namespace = \"...\", name = \"woodWidget\", substitutionHeadNamespace = \"...\", substitutionHeadName = \"widget\") public JAXBElement<WoodWidgetType> createWoodWidget(WoodWidgetType value) { return new JAXBElement<WoodWidgetType>(_WoodWidget_QNAME, WoodWidgetType.class, null, value); } }",
"<message name=\"widgetMessage\"> <part name=\"widgetPart\" element=\"xsd1:widget\" /> </message> <message name=\"numWidgets\"> <part name=\"numInventory\" type=\"xsd:int\" /> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\" /> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\" /> <output message=\"tns:widgetOrderBill\" name=\"bill\" /> <fault message=\"tns:badSize\" name=\"sizeFault\" /> </operation> <operation name=\"checkWidgets\"> <input message=\"tns:widgetMessage\" name=\"request\" /> <output message=\"tns:numWidgets\" name=\"response\" /> </operation> </portType>",
"@WebService(targetNamespace = \"...\", name = \"orderWidgets\") @XmlSeeAlso({com.widgetvendor.types.widgettypes.ObjectFactory.class}) public interface OrderWidgets { @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) @WebResult(name = \"numInventory\", targetNamespace = \"\", partName = \"numInventory\") @WebMethod public int checkWidgets( @WebParam(partName = \"widgetPart\", name = \"widget\", targetNamespace = \"...\") com.widgetvendor.types.widgettypes.WidgetType widgetPart ); }",
"<complexType name=\"widgetOrderInfo\"> <sequence> <element name=\"amount\" type=\"xsd:int\"/> <element ref=\"xsd1:widget\"/> </sequence> </complexType>",
"@XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = \"widgetOrderInfo\", propOrder = {\"amount\",\"widget\",}) public class WidgetOrderInfo { protected int amount; @XmlElementRef(name = \"widget\", namespace = \"...\", type = JAXBElement.class) protected JAXBElement<? extends WidgetType> widget; public int getAmount() { return amount; } public void setAmount(int value) { this.amount = value; } public JAXBElement<? extends WidgetType> getWidget() { return widget; } public void setWidget(JAXBElement<? extends WidgetType> value) { this.widget = ((JAXBElement<? extends WidgetType> ) value); } }",
"ObjectFactory of = new ObjectFactory(); PlasticWidgetType pWidget = of.createPlasticWidgetType(); pWidget.setShape = \"round'; pWidget.setColor = \"green\"; pWidget.setMoldProcess = \"injection\"; JAXBElement<PlasticWidgetType> widget = of.createPlasticWidget(pWidget); WidgetOrderInfo order = of.createWidgetOrderInfo(); order.setWidget(widget);",
"String elementName = order.getWidget().getName().getLocalPart(); if (elementName.equals(\"woodWidget\") { WoodWidgetType widget=order.getWidget().getValue(); } else if (elementName.equals(\"plasticWidget\") { PlasticWidgetType widget=order.getWidget().getValue(); } else { WidgetType widget=order.getWidget().getValue(); }",
"<message name=\"widgetOrder\"> <part name=\"widgetOrderForm\" type=\"xsd1:widgetOrderInfo\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"widgetOrderConformation\" type=\"xsd1:widgetOrderBillInfo\"/> </message> <message name=\"widgetMessage\"> <part name=\"widgetPart\" element=\"xsd1:widget\" /> </message> <message name=\"numWidgets\"> <part name=\"numInventory\" type=\"xsd:int\" /> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> </operation> <operation name=\"checkWidgets\"> <input message=\"tns:widgetMessage\" name=\"request\" /> <output message=\"tns:numWidgets\" name=\"response\" /> </operation> </portType>",
"@WebService(targetNamespace = \"http://widgetVendor.com/widgetOrderForm\", name = \"orderWidgets\") @XmlSeeAlso({com.widgetvendor.types.widgettypes.ObjectFactory.class}) public interface OrderWidgets { @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) @WebResult(name = \"numInventory\", targetNamespace = \"\", partName = \"numInventory\") @WebMethod public int checkWidgets( @WebParam(partName = \"widgetPart\", name = \"widget\", targetNamespace = \"http://widgetVendor.com/types/widgetTypes\") com.widgetvendor.types.widgettypes.WidgetType widgetPart ); @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) @WebResult(name = \"widgetOrderConformation\", targetNamespace = \"\", partName = \"widgetOrderConformation\") @WebMethod public com.widgetvendor.types.widgettypes.WidgetOrderBillInfo placeWidgetOrder( @WebParam(partName = \"widgetOrderForm\", name = \"widgetOrderForm\", targetNamespace = \"\") com.widgetvendor.types.widgettypes.WidgetOrderInfo widgetOrderForm ) throws BadSize; }",
"System.out.println(\"What type of widgets do you want to order?\"); System.out.println(\"1 - Normal\"); System.out.println(\"2 - Wood\"); System.out.println(\"3 - Plastic\"); System.out.println(\"Selection [1-3]\"); String selection = reader.readLine(); String trimmed = selection.trim(); char widgetType = trimmed.charAt(0); switch (widgetType) { case '1': { WidgetType widget = new WidgetType(); break; } case '2': { WoodWidgetType widget = new WoodWidgetType(); break; } case '3': { PlasticWidgetType widget = new PlasticWidgetType(); break; } default : System.out.println(\"Invaid Widget Selection!!\"); } proxy.checkWidgets(widgets);",
"public int checkWidgets(WidgetType widgetPart) { if (widgetPart instanceof WidgetType) { return checkWidgetInventory(widgetType); } else if (widgetPart instanceof WoodWidgetType) { WoodWidgetType widget = (WoodWidgetType)widgetPart; return checkWoodWidgetInventory(widget); } else if (widgetPart instanceof PlasticWidgetType) { PlasticWidgetType widget = (PlasticWidgetType)widgetPart; return checkPlasticWidgetInventory(widget); } }",
"ObjectFactory of = new ObjectFactory(); WidgetOrderInfo order = new of.createWidgetOrderInfo(); System.out.println(); System.out.println(\"What color widgets do you want to order?\"); String color = reader.readLine(); System.out.println(); System.out.println(\"What shape widgets do you want to order?\"); String shape = reader.readLine(); System.out.println(); System.out.println(\"What type of widgets do you want to order?\"); System.out.println(\"1 - Normal\"); System.out.println(\"2 - Wood\"); System.out.println(\"3 - Plastic\"); System.out.println(\"Selection [1-3]\"); String selection = reader.readLine(); String trimmed = selection.trim(); char widgetType = trimmed.charAt(0); switch (widgetType) { case '1': { WidgetType widget = of.createWidgetType(); widget.setColor(color); widget.setShape(shape); JAXB<WidgetType> widgetElement = of.createWidget(widget); order.setWidget(widgetElement); break; } case '2': { WoodWidgetType woodWidget = of.createWoodWidgetType(); woodWidget.setColor(color); woodWidget.setShape(shape); System.out.println(); System.out.println(\"What type of wood are your widgets?\"); String wood = reader.readLine(); woodWidget.setWoodType(wood); JAXB<WoodWidgetType> widgetElement = of.createWoodWidget(woodWidget); order.setWoodWidget(widgetElement); break; } case '3': { PlasticWidgetType plasticWidget = of.createPlasticWidgetType(); plasticWidget.setColor(color); plasticWidget.setShape(shape); System.out.println(); System.out.println(\"What type of mold to use for your widgets?\"); String mold = reader.readLine(); plasticWidget.setMoldProcess(mold); JAXB<WidgetType> widgetElement = of.createPlasticWidget(plasticWidget); order.setPlasticWidget(widgetElement); break; } default : System.out.println(\"Invaid Widget Selection!!\"); }",
"public com.widgetvendor.types.widgettypes.WidgetOrderBillInfo placeWidgetOrder(WidgetOrderInfo widgetOrderForm) { ObjectFactory of = new ObjectFactory(); WidgetOrderBillInfo bill = new WidgetOrderBillInfo() // Copy the shipping address and the number of widgets // ordered from widgetOrderForm to bill int numOrdered = widgetOrderForm.getAmount(); String elementName = widgetOrderForm.getWidget().getName().getLocalPart(); if (elementName.equals(\"woodWidget\") { WoodWidgetType widget=order.getWidget().getValue(); buildWoodWidget(widget, numOrdered); // Add the widget info to bill JAXBElement<WoodWidgetType> widgetElement = of.createWoodWidget(widget); bill.setWidget(widgetElement); float amtDue = numOrdered * 0.75; bill.setAmountDue(amtDue); } else if (elementName.equals(\"plasticWidget\") { PlasticWidgetType widget=order.getWidget().getValue(); buildPlasticWidget(widget, numOrdered); // Add the widget info to bill JAXBElement<PlasticWidgetType> widgetElement = of.createPlasticWidget(widget); bill.setWidget(widgetElement); float amtDue = numOrdered * 0.90; bill.setAmountDue(amtDue); } else { WidgetType widget=order.getWidget().getValue(); buildWidget(widget, numOrdered); // Add the widget info to bill JAXBElement<WidgetType> widgetElement = of.createWidget(widget); bill.setWidget(widgetElement); float amtDue = numOrdered * 0.30; bill.setAmountDue(amtDue); } return(bill); }"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSElementSubstitution |
7.8. GET and PUT Usage in Distribution Mode | 7.8. GET and PUT Usage in Distribution Mode In distribution mode, the cache performs a remote GET command before a write command. This occurs because certain methods (for example, Cache.put() ) return the value associated with the specified key according to the java.util.Map contract. When this is performed on an instance that does not own the key and the entry is not found in the L1 cache, the only reliable way to elicit this return value is to perform a remote GET before the PUT . The GET operation that occurs before the PUT operation is always synchronous, whether the cache is synchronous or asynchronous, because Red Hat JBoss Data Grid must wait for the return value. 7.8.1. Distributed GET and PUT Operation Resource Usage In distribution mode, the cache may execute a GET operation before executing the desired PUT operation. This operation is very expensive in terms of resources. Despite operating in a synchronous manner, a remote GET operation does not wait for all responses, which would result in wasted resources. The GET process accepts the first valid response received, which allows its performance to be unrelated to cluster size. Use the Flag.SKIP_REMOTE_LOOKUP flag for a per-invocation setting if return values are not required for your implementation. Such actions do not impair cache operations and the accurate functioning of all public methods, but do break the java.util.Map interface contract. The contract breaks because unreliable and inaccurate return values are provided to certain methods. As a result, ensure that these return values are not used for any important purpose in your configuration. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-get_and_put_usage_in_distribution_mode |
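As an illustration of the per-invocation flag described above, the following minimal sketch performs a write without triggering the remote GET. It assumes the org.infinispan API shipped with JBoss Data Grid; the cache parameter, key, and value types are hypothetical and not taken from this chapter.
import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class FireAndForgetPut {
    // The flag applies only to this invocation; the value returned by put() is no longer reliable
    public static void putIgnoringReturnValue(Cache<String, String> cache, String key, String value) {
        AdvancedCache<String, String> advancedCache = cache.getAdvancedCache();
        advancedCache.withFlags(Flag.SKIP_REMOTE_LOOKUP).put(key, value);
    }
}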
Chapter 58. JmxTransOutputDefinitionTemplate schema reference | Chapter 58. JmxTransOutputDefinitionTemplate schema reference Used in: JmxTransSpec Property Description outputType Template for setting the format of the data that will be pushed.For more information see JmxTrans OutputWriters . string host The DNS/hostname of the remote host that the data is pushed to. string port The port of the remote host that the data is pushed to. integer flushDelayInSeconds How many seconds the JmxTrans waits before pushing a new set of data out. integer typeNames Template for filtering data to be included in response to a wildcard query. For more information see JmxTrans queries . string array name Template for setting the name of the output definition. This is used to identify where to send the results of queries should be sent. string | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-JmxTransOutputDefinitionTemplate-reference |
20.44. Disk I/O Throttling | 20.44. Disk I/O Throttling The virsh blkdeviotune command sets disk I/O throttling for a specified guest virtual machine. This can prevent a guest virtual machine from over-utilizing shared resources and thus impacting the performance of other guest virtual machines. The following format should be used: The only required parameter is the domain name of the guest virtual machine. To list the disk devices of a guest virtual machine, run the virsh domblklist command. The --config , --live , and --current arguments function the same as in Section 20.43, "Setting Schedule Parameters" . If no limit is specified, the command queries the current I/O limit settings. Otherwise, alter the limits with the following flags: --total-bytes-sec - specifies total throughput limit in bytes per second. --read-bytes-sec - specifies read throughput limit in bytes per second. --write-bytes-sec - specifies write throughput limit in bytes per second. --total-iops-sec - specifies total I/O operations limit per second. --read-iops-sec - specifies read I/O operations limit per second. --write-iops-sec - specifies write I/O operations limit per second. For more information, see the blkdeviotune section of the virsh man page. For an example domain XML see Figure 23.27, "Devices - Hard drives, floppy disks, CD-ROMs Example" . | [
"virsh blkdeviotune domain < device > [[--config] [--live] | [--current]] [[total-bytes-sec] | [read-bytes-sec] [write-bytes-sec]] [[total-iops-sec] [read-iops-sec] [write-iops-sec]]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Disk_IO_throttling |
Chapter 1. Using image mode for RHEL with MicroShift | Chapter 1. Using image mode for RHEL with MicroShift You can embed MicroShift into an operating system image using image mode for Red Hat Enterprise Linux (RHEL). Important Image mode for RHEL is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.1. Image mode for Red Hat Enterprise Linux (RHEL) Image mode for Red Hat Enterprise Linux (RHEL) is a Technology Preview deployment method that uses a container-native approach to build, deploy, and manage the operating system as a bootc image. By using bootc, you can build, deploy, and manage the operating system as if it is any other container. This container image uses standard OCI or Docker containers as a transport and delivery format for base operating system updates. A bootc image includes a Linux kernel that is used to start the operating system. By using bootc containers, developers, operations administrators, and solution providers can all use the same container-native tools and techniques. Image mode splits the creation and installation of software changes into two steps: one on a build system and one on a running target system. In the build-system step, a Podman build inspects the RPM files available for installation, determines any dependencies, and creates an ordered list of chained steps to complete, with the end result being a new operating system available to install. In the running-target-system step, a bootc update downloads, unpacks, and makes the new operating system bootable alongside the currently running system. Local configuration changes are carried forward to the new operating system, but do not take effect until the system is rebooted and the new operating system image replaces the running image. 1.1.1. Using image mode for RHEL with MicroShift To use image mode for RHEL, ensure that the following resources are available: A RHEL 9.4 host with an active Red Hat subscription for building MicroShift bootc images. A remote registry for storing and accessing bootc images. You can use image mode for RHEL with a MicroShift cluster on AArch64 or x86_64 system architectures. The workflow for using image mode with MicroShift includes the following steps: Build the MicroShift bootc image. Publish the image. Run the image. This step includes configuring MicroShift networking and storage. Important The rpm-ostree file system is not supported in image mode and must not be used to make changes to deployments that use image mode. 1.2. Building the bootc image Build your Red Hat Enterprise Linux (RHEL) that contains MicroShift as a bootable container image by using a Containerfile. Important Image mode for RHEL is Technology Preview. Using a bootc image in production environments is not supported. Prerequisites A Red Hat Enterprise Linux (RHEL) 9.4 host with an active Red Hat subscription for building MicroShift bootc images and running containers. You are logged into the RHEL 9.4 host using the user credentials that have sudo permissions. 
The rhocp and fast-datapath repositories are accessible in the host subscription. The repositories do not necessarily need to be enabled on the host. You have a remote registry such as Red Hat quay for storing and accessing bootc images. Procedure Create a Containerfile that includes the following instructions: Example Containerfile for RHEL image mode FROM registry.redhat.io/rhel9/rhel-bootc:9.4 ARG USHIFT_VER=4.17 RUN dnf config-manager \ --set-enabled rhocp-USD{USHIFT_VER}-for-rhel-9-USD(uname -m)-rpms \ --set-enabled fast-datapath-for-rhel-9-USD(uname -m)-rpms RUN dnf install -y firewalld microshift && \ systemctl enable microshift && \ dnf clean all # Create a default 'redhat' user with the specified password. # Add it to the 'wheel' group to allow for running sudo commands. ARG USER_PASSWD RUN if [ -z "USD{USER_PASSWD}" ] ; then \ echo USER_PASSWD is a mandatory build argument && exit 1 ; \ fi RUN useradd -m -d /var/home/redhat -G wheel redhat && \ echo "redhat:USD{USER_PASSWD}" | chpasswd # Mandatory firewall configuration RUN firewall-offline-cmd --zone=public --add-port=22/tcp && \ firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 && \ firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 # Create a systemd unit to recursively make the root filesystem subtree # shared as required by OVN images RUN cat > /etc/systemd/system/microshift-make-rshared.service <<'EOF' [Unit] Description=Make root filesystem shared Before=microshift.service ConditionVirtualization=container [Service] Type=oneshot ExecStart=/usr/bin/mount --make-rshared / [Install] WantedBy=multi-user.target EOF RUN systemctl enable microshift-make-rshared.service Important Podman uses the host subscription information and repositories inside the container when building the container image. If the rhocp and fast-datapath repositories are not available on the host, the build fails. Create a local bootc image by running the following image build command: PULL_SECRET=~/.pull-secret.json USER_PASSWD=<your_redhat_user_password> 1 IMAGE_NAME=microshift-4.17-bootc USD sudo podman build --authfile "USD{PULL_SECRET}" -t "USD{IMAGE_NAME}" \ --build-arg USER_PASSWD="USD{USER_PASSWD}" \ -f Containerfile 1 Replace <your_redhat_user_password> with your password. Note How secrets are used during the image build: The podman --authfile argument is required to pull the base rhel-bootc:9.4 image from the registry.redhat.io registry. The build USER_PASSWD argument is used to set a password for the redhat user. Verification Verify that the local bootc image was created by running the following command: USD sudo podman images "USD{IMAGE_NAME}" Example output REPOSITORY TAG IMAGE ID CREATED SIZE localhost/microshift-4.17-bootc latest 193425283c00 2 minutes ago 2.31 GB 1.3. Publishing the bootc image to the remote registry Publish your bootc image to the remote registry so that the image can be used for running the container on another host, or for when you want to install a new operating system with the bootc image layer. Prerequisites You are logged in to the RHEL 9.4 host where the image was built using the user credentials that have sudo permissions. You have a remote registry such as Red Hat quay for storing and accessing bootc images. You created the Containerfile and built the image. Procedure Log in to your remote registry by running the following command: REGISTRY_URL=quay.io USD sudo podman login "USD{REGISTRY_URL}" 1 1 Replace REGISTRY_URL with the URL for your registry. 
Publish the image by running the following command: REGISTRY_IMG=<myorg/mypath>/"USD{IMAGE_NAME}" 1 2 IMAGE_NAME=<microshift-4.17-bootc> 3 USD sudo podman push localhost/"USD{IMAGE_NAME}" "USD{REGISTRY_URL}/USD{REGISTRY_IMG}" 1 Replace <myorg/mypath> with your remote registry organization name and path. 2 Replace <microshift-4.17-bootc> with the name of the image you want to publish. Verification Run the container using the image you pushed to your registry as described in the "Running the MicroShift bootc container" section. 1.4. Additional resources Image mode for Red Hat Enterprise Linux learning exercises Using image mode for RHEL to build, deploy, and manage operating systems | [
"FROM registry.redhat.io/rhel9/rhel-bootc:9.4 ARG USHIFT_VER=4.17 RUN dnf config-manager --set-enabled rhocp-USD{USHIFT_VER}-for-rhel-9-USD(uname -m)-rpms --set-enabled fast-datapath-for-rhel-9-USD(uname -m)-rpms RUN dnf install -y firewalld microshift && systemctl enable microshift && dnf clean all Create a default 'redhat' user with the specified password. Add it to the 'wheel' group to allow for running sudo commands. ARG USER_PASSWD RUN if [ -z \"USD{USER_PASSWD}\" ] ; then echo USER_PASSWD is a mandatory build argument && exit 1 ; fi RUN useradd -m -d /var/home/redhat -G wheel redhat && echo \"redhat:USD{USER_PASSWD}\" | chpasswd Mandatory firewall configuration RUN firewall-offline-cmd --zone=public --add-port=22/tcp && firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 && firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 Create a systemd unit to recursively make the root filesystem subtree shared as required by OVN images RUN cat > /etc/systemd/system/microshift-make-rshared.service <<'EOF' [Unit] Description=Make root filesystem shared Before=microshift.service ConditionVirtualization=container [Service] Type=oneshot ExecStart=/usr/bin/mount --make-rshared / [Install] WantedBy=multi-user.target EOF RUN systemctl enable microshift-make-rshared.service",
"PULL_SECRET=~/.pull-secret.json USER_PASSWD=<your_redhat_user_password> 1 IMAGE_NAME=microshift-4.17-bootc sudo podman build --authfile \"USD{PULL_SECRET}\" -t \"USD{IMAGE_NAME}\" --build-arg USER_PASSWD=\"USD{USER_PASSWD}\" -f Containerfile",
"sudo podman images \"USD{IMAGE_NAME}\"",
"REPOSITORY TAG IMAGE ID CREATED SIZE localhost/microshift-4.17-bootc latest 193425283c00 2 minutes ago 2.31 GB",
"REGISTRY_URL=quay.io sudo podman login \"USD{REGISTRY_URL}\" 1",
"REGISTRY_IMG=<myorg/mypath>/\"USD{IMAGE_NAME}\" 1 2 IMAGE_NAME=<microshift-4.17-bootc> 3 sudo podman push localhost/\"USD{IMAGE_NAME}\" \"USD{REGISTRY_URL}/USD{REGISTRY_IMG}\""
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/installing_with_rhel_image_mode/installing-with-rhel-image-mode |
23.19. Storage Volumes | 23.19. Storage Volumes A storage volume will generally be either a file or a device node; since 1.2.0, an optional output-only attribute type lists the actual type (file, block, dir, network, or netdir), 23.19.1. General Metadata The top section of the <volume> element contains information known as metadata as shown in this XML example: ... <volume type='file'> <name>sparse.img</name> <key>/var/lib/libvirt/images/sparse.img</key> <allocation>0</allocation> <capacity unit="T">1</capacity> ... </volume> Figure 23.83. General metadata for storage volumes The table ( Table 23.30, "Volume child elements" ) explains the child elements that are valid for the parent <volume> element: Table 23.30. Volume child elements Element Description <name> Provides a name for the storage volume which is unique to the storage pool. This is mandatory when defining a storage volume. <key> Provides an identifier for the storage volume which identifies a single storage volume. In some cases it is possible to have two distinct keys identifying a single storage volume. This field cannot be set when creating a storage volume as it is always generated. <allocation> Provides the total storage allocation for the storage volume. This may be smaller than the logical capacity if the storage volume is sparsely allocated. It may also be larger than the logical capacity if the storage volume has substantial metadata overhead. This value is in bytes. If omitted when creating a storage volume, the storage volume will be fully allocated at time of creation. If set to a value smaller than the capacity, the storage pool has the option of deciding to sparsely allocate a storage volume or not. Different types of storage pools may treat sparse storage volumes differently. For example, a logical pool will not automatically expand a storage volume's allocation when it gets full; the user is responsible for configuring it or configuring dmeventd to do so automatically. By default, this is specified in bytes. See Note <capacity> Provides the logical capacity for the storage volume. This value is in bytes by default, but a <unit> attribute can be specified with the same semantics as for <allocation> described in Note . This is compulsory when creating a storage volume. <source> Provides information about the underlying storage allocation of the storage volume. This may not be available for some storage pool types. <target> Provides information about the representation of the storage volume on the local host physical machine. Note When necessary, an optional attribute unit can be specified to adjust the passed value. This attribute can be used with the elements <allocation> and <capacity> . Accepted values for the attribute unit include: B or bytes for bytes KB for kilobytes K or KiB for kibibytes MB for megabytes M or MiB for mebibytes GB for gigabytes G or GiB for gibibytes TB for terabytes T or TiB for tebibytes PB for petabytes P or PiB for pebibytes EB for exabytes E or EiB for exbibytes 23.19.2. Setting Target Elements The <target> element can be placed in the <volume> top level element. It is used to describe the mapping that is done on the storage volume into the host physical machine filesystem. 
This element can take the following child elements: <target> <path>/var/lib/libvirt/images/sparse.img</path> <format type='qcow2'/> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> <compat>1.1</compat> <features> <lazy_refcounts/> </features> </target> Figure 23.84. Target child elements The specific child elements for <target> are explained in Table 23.31, "Target child elements" : Table 23.31. Target child elements Element Description <path> Provides the location at which the storage volume can be accessed on the local filesystem, as an absolute path. This is a read-only attribute, and should not be specified when creating a volume. <format> Provides information about the pool specific volume format. For disk-based storage pools, it will provide the partition type. For filesystem or directory-based storage pools, it will provide the file format type, (such as cow, qcow, vmdk, raw). If omitted when creating a storage volume, the storage pool's default format will be used. The actual format is specified by the type attribute. See the sections on the specific storage pools in Section 13.2, "Using Storage Pools" for the list of valid values. <permissions> Provides information about the default permissions to use when creating storage volumes. This is currently only useful for directory or filesystem-based storage pools, where the storage volumes allocated are simple files. For storage pools where the storage volumes are device nodes, the hot-plug scripts determine permissions. It contains four child elements. The <mode> element contains the octal permission set. The <owner> element contains the numeric user ID. The <group> element contains the numeric group ID. The <label> element contains the MAC (for example, SELinux) label string. <compat> Specify compatibility level. So far, this is only used for <type='qcow2'> volumes. Valid values are <compat> 0.10 </compat> for qcow2 (version 2) and <compat> 1.1 </compat> for qcow2 (version 3) so far for specifying the QEMU version the images should be compatible with. If the <feature> element is present, <compat> 1.1 </compat> is used. If omitted, qemu-img default is used. <features> Format-specific features. Presently is only used with <format type='qcow2'/> (version 3). Valid sub-elements include <lazy_refcounts/> . This reduces the amount of metadata writes and flushes, and therefore improves initial write performance. This improvement is seen especially for writethrough cache modes, at the cost of having to repair the image after a crash, and allows delayed reference counter updates. It is recommended to use this feature with qcow2 (version 3), as it is faster when this is implemented. 23.19.3. Setting Backing Store Elements A single <backingStore> element is contained within the top level <volume> element. This tag is used to describe the optional copy-on-write backing store for the storage volume. It can contain the following child elements: <backingStore> <path>/var/lib/libvirt/images/master.img</path> <format type='raw'/> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> </backingStore> Figure 23.85. Backing store child elements Table 23.32. Backing store child elements Element Description <path> Provides the location at which the backing store can be accessed on the local filesystem, as an absolute path. If omitted, there is no backing store for this storage volume. 
<format> Provides information about the pool specific backing store format. For disk-based storage pools it will provide the partition type. For filesystem or directory-based storage pools it will provide the file format type (such as cow, qcow, vmdk, raw). The actual format is specified via the <type> attribute. Consult the pool-specific docs for the list of valid values. Most file formats require a backing store of the same format, however, the qcow2 format allows a different backing store format. <permissions> Provides information about the permissions of the backing file. It contains four child elements. The <mode> element contains the octal permission set. The <owner> element contains the numeric user ID. The <group> element contains the numeric group ID. The <label> element contains the MAC (for example, SELinux) label string. | [
"<volume type='file'> <name>sparse.img</name> <key>/var/lib/libvirt/images/sparse.img</key> <allocation>0</allocation> <capacity unit=\"T\">1</capacity> </volume>",
"<target> <path>/var/lib/libvirt/images/sparse.img</path> <format type='qcow2'/> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> <compat>1.1</compat> <features> <lazy_refcounts/> </features> </target>",
"<backingStore> <path>/var/lib/libvirt/images/master.img</path> <format type='raw'/> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> </backingStore>"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-storage_volumes |
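The volume XML shown in the preceding chapter can also be submitted to libvirt programmatically. The following minimal sketch is not part of the original chapter: it assumes the libvirt-java bindings (org.libvirt) are on the classpath, a running hypervisor reachable at the qemu:///system URI, and an existing storage pool named default; all three are assumptions for illustration only.
import org.libvirt.Connect;
import org.libvirt.StoragePool;
import org.libvirt.StorageVol;

public class CreateSparseVolume {
    public static void main(String[] args) throws Exception {
        // Volume definition based on the sparse.img example above; the key and target path
        // are omitted because libvirt generates them and they are read-only at creation time
        String volumeXml = "<volume type='file'>"
                + "<name>sparse.img</name>"
                + "<allocation>0</allocation>"
                + "<capacity unit='T'>1</capacity>"
                + "</volume>";
        Connect conn = new Connect("qemu:///system");                 // hypothetical hypervisor URI
        StoragePool pool = conn.storagePoolLookupByName("default");   // hypothetical pool name
        StorageVol vol = pool.storageVolCreateXML(volumeXml, 0);
        System.out.println("Created volume at: " + vol.getPath());
        conn.close();
    }
}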
Chapter 78. JAXB | Chapter 78. JAXB JAXB is a Data Format which uses the JAXB2 XML marshalling standard which is included in Java 6 to unmarshal an XML payload into Java objects or to marshal Java objects into an XML payload. 78.1. Options The JAXB dataformat supports 19 options, which are listed below. Name Default Java Type Description contextPath String Required Package name where your JAXB classes are located. contextPathIsClassName Boolean This can be set to true to mark that the contextPath is referring to a classname and not a package name. schema String To validate against an existing schema. Your can use the prefix classpath:, file: or http: to specify how the resource should by resolved. You can separate multiple schema files by using the ',' character. schemaSeverityLevel Enum Sets the schema severity level to use when validating against a schema. This level determines the minimum severity error that triggers JAXB to stop continue parsing. The default value of 0 (warning) means that any error (warning, error or fatal error) will trigger JAXB to stop. There are the following three levels: 0=warning, 1=error, 2=fatal error. Enum values: 0 1 2 prettyPrint Boolean To enable pretty printing output nicely formatted. Is by default false. objectFactory Boolean Whether to allow using ObjectFactory classes to create the POJO classes during marshalling. This only applies to POJO classes that has not been annotated with JAXB and providing jaxb.index descriptor files. ignoreJAXBElement Boolean Whether to ignore JAXBElement elements - only needed to be set to false in very special use-cases. mustBeJAXBElement Boolean Whether marhsalling must be java objects with JAXB annotations. And if not then it fails. This option can be set to false to relax that, such as when the data is already in XML format. filterNonXmlChars Boolean To ignore non xml characheters and replace them with an empty space. encoding String To overrule and use a specific encoding. fragment Boolean To turn on marshalling XML fragment trees. By default JAXB looks for XmlRootElement annotation on given class to operate on whole XML tree. This is useful but not always - sometimes generated code does not have XmlRootElement annotation, sometimes you need unmarshall only part of tree. In that case you can use partial unmarshalling. To enable this behaviours you need set property partClass. Camel will pass this class to JAXB's unmarshaler. partClass String Name of class used for fragment parsing. See more details at the fragment option. partNamespace String XML namespace to use for fragment parsing. See more details at the fragment option. namespacePrefixRef String When marshalling using JAXB or SOAP then the JAXB implementation will automatic assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. xmlStreamWriterWrapper String To use a custom xml stream writer. schemaLocation String To define the location of the schema. noNamespaceSchemaLocation String To define the location of the namespaceless schema. jaxbProviderProperties String Refers to a custom java.util.Map to lookup in the registry containing custom JAXB provider properties to be used with the JAXB marshaller. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. 78.2. 
Using the Java DSL For example the following uses a named DataFormat of jaxb which is configured with a number of Java package names to initialize the JAXBContext . DataFormat jaxb = new JaxbDataFormat("com.acme.model"); from("activemq:My.Queue"). unmarshal(jaxb). to("mqseries:Another.Queue"); You can if you prefer use a named reference to a data format which can then be defined in your Registry such as via your Spring XML file. e.g. from("activemq:My.Queue"). unmarshal("myJaxbDataType"). to("mqseries:Another.Queue"); 78.3. Using Spring XML The following example shows how to configure the JaxbDataFormat and use it in multiple routes. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <bean id="myJaxb" class="org.apache.camel.converter.jaxb.JaxbDataFormat"> <property name="contextPath" value="org.apache.camel.example"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <marshal><custom ref="myJaxb"/></marshal> <to uri="direct:marshalled"/> </route> <route> <from uri="direct:marshalled"/> <unmarshal><custom ref="myJaxb"/></unmarshal> <to uri="mock:result"/> </route> </camelContext> </beans> Multiple context paths It is possible to use this data format with more than one context path. You can specify context path using : as separator, for example com.mycompany:com.mycompany2 . Note that this is handled by JAXB implementation and might change if you use different vendor than RI. 78.4. Partial marshalling/unmarshalling JAXB 2 supports marshalling and unmarshalling XML tree fragments. By default JAXB looks for @XmlRootElement annotation on given class to operate on whole XML tree. This is useful but not always - sometimes generated code does not have @XmlRootElement annotation, sometimes you need unmarshall only part of tree. In that case you can use partial unmarshalling. To enable this behaviours you need set property partClass . Camel will pass this class to JAXB's unmarshaler. If JaxbConstants.JAXB_PART_CLASS is set as one of headers, (even if partClass property is set on DataFormat), the property on DataFormat is surpassed and the one set in the headers is used. For marshalling you have to add partNamespace attribute with QName of destination namespace. Example of Spring DSL you can find above. If JaxbConstants.JAXB_PART_NAMESPACE is set as one of headers, (even if partNamespace property is set on DataFormat), the property on DataFormat is surpassed and the one set in the headers is used. While setting partNamespace through JaxbConstants.JAXB_PART_NAMESPACE , please note that you need to specify its value \{[namespaceUri]}[localPart] ... .setHeader(JaxbConstants.JAXB_PART_NAMESPACE, simple("{http://www.camel.apache.org/jaxb/example/address/1}address")); ... 78.5. Fragment JaxbDataFormat has new property fragment which can set the the Marshaller.JAXB_FRAGMENT encoding property on the JAXB Marshaller. If you don't want the JAXB Marshaller to generate the XML declaration, you can set this option to be true. The default value of this property is false. 78.6. 
Ignoring the NonXML Character JaxbDataFormat supports ignoring NonXML characters: set the filterNonXmlChars property to true, and JaxbDataFormat will replace any NonXML character with " " when it is marshaling or unmarshaling the message. You can also do it by setting the Exchange property Exchange.FILTER_NON_XML_CHARS . JDK 1.5 JDK 1.6+ Filtering in use StAX API and implementation No Filtering not in use StAX API only No This feature has been tested with Woodstox 3.2.9 and Sun JDK 1.6 StAX implementation. JaxbDataFormat now allows you to customize the XMLStreamWriter used to marshal the stream to XML. Using this configuration, you can add your own stream writer to completely remove, escape, or replace non-xml characters. JaxbDataFormat customWriterFormat = new JaxbDataFormat("org.apache.camel.foo.bar"); customWriterFormat.setXmlStreamWriterWrapper(new TestXmlStreamWriter()); The following example shows using the Spring DSL and also enabling Camel's NonXML filtering: <bean id="testXmlStreamWriterWrapper" class="org.apache.camel.jaxb.TestXmlStreamWriter"/> <jaxb filterNonXmlChars="true" contextPath="org.apache.camel.foo.bar" xmlStreamWriterWrapper="#testXmlStreamWriterWrapper" /> 78.7. Working with the ObjectFactory If you use XJC to create the java class from the schema, you will get an ObjectFactory for your JAXB context. Since the ObjectFactory uses JAXBElement to hold the reference of the schema and element instance value, JaxbDataFormat will ignore the JAXBElement by default and you will get the element instance value instead of the JAXBElement object from the unmarshaled message body. If you want to get the JAXBElement object from the unmarshaled message body, you need to set the JaxbDataFormat object's ignoreJAXBElement property to be false. 78.8. Setting encoding You can set the encoding option to use when marshalling. It is the Marshaller.JAXB_ENCODING encoding property on the JAXB Marshaller. You can set up which encoding to use when you declare the JAXB data format. You can also provide the encoding in the Exchange property Exchange.CHARSET_NAME . This property will overrule the encoding set on the JAXB data format. In this Spring DSL we have defined to use iso-8859-1 as the encoding. 78.9. Controlling namespace prefix mapping When marshalling using JAXB or SOAP then the JAXB implementation will automatically assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. Notice this requires having JAXB-RI 2.1 or better (from SUN) on the classpath, as the mapping functionality is dependent on the implementation of JAXB, whether it is supported. For example in Spring XML we can define a Map with the mapping. In the mapping file below, we map SOAP to use soap as prefix. While our custom namespace "http://www.mycompany.com/foo/2" is not using any prefix. To use this in JAXB or SOAP you refer to this map, using the namespacePrefixRef attribute as shown below. Then Camel will lookup in the Registry a java.util.Map with the id "myMap", which was what we defined above. 78.10. Schema validation The JAXB Data Format supports validation by marshalling and unmarshalling from/to XML. 
Your can use the prefix classpath: , file: or http: to specify how the resource should by resolved. You can separate multiple schema files by using the ',' character. Using the Java DSL, you can configure it in the following way: JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchema("classpath:person.xsd,classpath:address.xsd"); You can do the same using the XML DSL: <marshal> <jaxb id="jaxb" schema="classpath:person.xsd,classpath:address.xsd"/> </marshal> Camel will create and pool the underling SchemaFactory instances on the fly, because the SchemaFactory shipped with the JDK is not thread safe. However, if you have a SchemaFactory implementation which is thread safe, you can configure the JAXB data format to use this one: JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setSchemaFactory(thradSafeSchemaFactory); 78.11. Schema Location The JAXB Data Format supports to specify the SchemaLocation when marshaling the XML. Using the Java DSL, you can configure it in the following way: JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchemaLocation("schema/person.xsd"); You can do the same using the XML DSL: <marshal> <jaxb id="jaxb" schemaLocation="schema/person.xsd"/> </marshal> 78.12. Marshal data that is already XML The JAXB marshaller requires that the message body is JAXB compatible, eg its a JAXBElement, eg a java instance that has JAXB annotations, or extend JAXBElement. There can be situations where the message body is already in XML, eg from a String type. There is a new option mustBeJAXBElement you can set to false, to relax this check, so the JAXB marshaller only attempts to marshal JAXBElements (javax.xml.bind.JAXBIntrospector#isElement returns true). And in those situations the marshaller fallbacks to marshal the message body as-is. 78.13. Dependencies To use JAXB in your camel routes you need to add the a dependency on camel-jaxb which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jaxb</artifactId> <version>{CamelSBVersion}</version> </dependency> 78.14. Spring Boot Auto-Configuration When using jaxb with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jaxb-starter</artifactId> </dependency> The component supports 20 options, which are listed below. Name Description Default Type camel.dataformat.jaxb.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.jaxb.context-path Package name where your JAXB classes are located. String camel.dataformat.jaxb.context-path-is-class-name This can be set to true to mark that the contextPath is referring to a classname and not a package name. false Boolean camel.dataformat.jaxb.enabled Whether to enable auto configuration of the jaxb data format. This is enabled by default. Boolean camel.dataformat.jaxb.encoding To overrule and use a specific encoding. 
String camel.dataformat.jaxb.filter-non-xml-chars To ignore non xml characheters and replace them with an empty space. false Boolean camel.dataformat.jaxb.fragment To turn on marshalling XML fragment trees. By default JAXB looks for XmlRootElement annotation on given class to operate on whole XML tree. This is useful but not always - sometimes generated code does not have XmlRootElement annotation, sometimes you need unmarshall only part of tree. In that case you can use partial unmarshalling. To enable this behaviours you need set property partClass. Camel will pass this class to JAXB's unmarshaler. false Boolean camel.dataformat.jaxb.ignore-j-a-x-b-element Whether to ignore JAXBElement elements - only needed to be set to false in very special use-cases. false Boolean camel.dataformat.jaxb.jaxb-provider-properties Refers to a custom java.util.Map to lookup in the registry containing custom JAXB provider properties to be used with the JAXB marshaller. String camel.dataformat.jaxb.must-be-j-a-x-b-element Whether marhsalling must be java objects with JAXB annotations. And if not then it fails. This option can be set to false to relax that, such as when the data is already in XML format. false Boolean camel.dataformat.jaxb.namespace-prefix-ref When marshalling using JAXB or SOAP then the JAXB implementation will automatic assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. String camel.dataformat.jaxb.no-namespace-schema-location To define the location of the namespaceless schema. String camel.dataformat.jaxb.object-factory Whether to allow using ObjectFactory classes to create the POJO classes during marshalling. This only applies to POJO classes that has not been annotated with JAXB and providing jaxb.index descriptor files. false Boolean camel.dataformat.jaxb.part-class Name of class used for fragment parsing. See more details at the fragment option. String camel.dataformat.jaxb.part-namespace XML namespace to use for fragment parsing. See more details at the fragment option. String camel.dataformat.jaxb.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.jaxb.schema To validate against an existing schema. Your can use the prefix classpath:, file: or http: to specify how the resource should by resolved. You can separate multiple schema files by using the ',' character. String camel.dataformat.jaxb.schema-location To define the location of the schema. String camel.dataformat.jaxb.schema-severity-level Sets the schema severity level to use when validating against a schema. This level determines the minimum severity error that triggers JAXB to stop continue parsing. The default value of 0 (warning) means that any error (warning, error or fatal error) will trigger JAXB to stop. There are the following three levels: 0=warning, 1=error, 2=fatal error. 0 Integer camel.dataformat.jaxb.xml-stream-writer-wrapper To use a custom xml stream writer. String | [
"DataFormat jaxb = new JaxbDataFormat(\"com.acme.model\"); from(\"activemq:My.Queue\"). unmarshal(jaxb). to(\"mqseries:Another.Queue\");",
"from(\"activemq:My.Queue\"). unmarshal(\"myJaxbDataType\"). to(\"mqseries:Another.Queue\");",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <bean id=\"myJaxb\" class=\"org.apache.camel.converter.jaxb.JaxbDataFormat\"> <property name=\"contextPath\" value=\"org.apache.camel.example\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <marshal><custom ref=\"myJaxb\"/></marshal> <to uri=\"direct:marshalled\"/> </route> <route> <from uri=\"direct:marshalled\"/> <unmarshal><custom ref=\"myJaxb\"/></unmarshal> <to uri=\"mock:result\"/> </route> </camelContext> </beans>",
".setHeader(JaxbConstants.JAXB_PART_NAMESPACE, simple(\"{http://www.camel.apache.org/jaxb/example/address/1}address\"));",
"JaxbDataFormat customWriterFormat = new JaxbDataFormat(\"org.apache.camel.foo.bar\"); customWriterFormat.setXmlStreamWriterWrapper(new TestXmlStreamWriter());",
"<bean id=\"testXmlStreamWriterWrapper\" class=\"org.apache.camel.jaxb.TestXmlStreamWriter\"/> <jaxb filterNonXmlChars=\"true\" contextPath=\"org.apache.camel.foo.bar\" xmlStreamWriterWrapper=\"#testXmlStreamWriterWrapper\" />",
"<util:map id=\"myMap\"> <entry key=\"http://www.w3.org/2003/05/soap-envelope\" value=\"soap\"/> <!-- we dont want any prefix for our namespace --> <entry key=\"http://www.mycompany.com/foo/2\" value=\"\"/> </util:map>",
"<marshal> <soapjaxb version=\"1.2\" contextPath=\"com.mycompany.foo\" namespacePrefixRef=\"myMap\"/> </marshal>",
"JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchema(\"classpath:person.xsd,classpath:address.xsd\");",
"<marshal> <jaxb id=\"jaxb\" schema=\"classpath:person.xsd,classpath:address.xsd\"/> </marshal>",
"JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setSchemaFactory(thradSafeSchemaFactory);",
"JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchemaLocation(\"schema/person.xsd\");",
"<marshal> <jaxb id=\"jaxb\" schemaLocation=\"schema/person.xsd\"/> </marshal>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jaxb</artifactId> <version>{CamelSBVersion}</version> </dependency>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jaxb-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-jaxb-dataformat-starter |
Chapter 114. Using external identity providers to authenticate to IdM | Chapter 114. Using external identity providers to authenticate to IdM You can associate users with external identity providers (IdP) that support the OAuth 2 device authorization flow. When these users authenticate with the SSSD version available in RHEL 8.7 or later, they receive RHEL Identity Management (IdM) single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. Notable features include: Adding, modifying, and deleting references to external IdPs with ipa idp-* commands. Enabling IdP authentication for users with the ipa user-mod --user-auth-type=idp command. 114.1. The benefits of connecting IdM to an external IdP As an administrator, you might want to allow users stored in an external identity source, such as a cloud services provider, to access RHEL systems joined to your Identity Management (IdM) environment. To achieve this, you can delegate the authentication and authorization process of issuing Kerberos tickets for these users to that external entity. You can use this feature to expand IdM's capabilities and allow users stored in external identity providers (IdPs) to access Linux systems managed by IdM. 114.2. How IdM incorporates logins via external IdPs SSSD 2.7.0 contains the sssd-idp package, which implements the idp Kerberos pre-authentication method. This authentication method follows the OAuth 2.0 Device Authorization Grant flow to delegate authorization decisions to external IdPs: An IdM client user initiates the OAuth 2.0 Device Authorization Grant flow, for example, by attempting to retrieve a Kerberos TGT with the kinit utility at the command line. A special code and website link are sent from the Authorization Server to the IdM KDC backend. The IdM client displays the link and the code to the user. In this example, the IdM client outputs the link and code on the command line. The user opens the website link in a browser, which can be on another host, a mobile phone, and so on. The user enters the special code. If necessary, the user logs in to the OAuth 2.0-based IdP. The user is prompted to authorize the client to access information. The user confirms access at the original device prompt. In this example, the user presses the Enter key at the command line. The IdM KDC backend polls the OAuth 2.0 Authorization Server for access to user information. What is supported: Logging in remotely via SSH with the keyboard-interactive authentication method enabled, which allows calling Pluggable Authentication Module (PAM) libraries. Logging in locally with the console via the logind service. Retrieving a Kerberos ticket-granting ticket (TGT) with the kinit utility. What is currently not supported: Logging in to the IdM WebUI directly. To log in to the IdM WebUI, you must first acquire a Kerberos ticket. Logging in to the Cockpit WebUI directly. To log in to the Cockpit WebUI, you must first acquire a Kerberos ticket. Additional resources Authentication against external Identity Providers RFC 8628: OAuth 2.0 Device Authorization Grant 114.3. Creating a reference to an external identity provider To connect external identity providers (IdPs) to your Identity Management (IdM) environment, create IdP references in IdM. Complete this procedure to create a reference called my-keycloak-idp to an IdP based on the Keycloak template. For more reference templates, see Example references to different external IdPs in IdM .
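Before the step-by-step procedure, the following sketch illustrates, for orientation only, the two HTTP exchanges that RFC 8628 defines and that SSSD performs on your behalf in the device flow described in Section 114.2. The realm name, client ID, and device code are placeholder values borrowed from the Keycloak template later in this chapter:

# Step 1 (sketch): request a device code and user code from the IdP.
curl -s -d "client_id=ipa_oidc_client" \
    "https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth/device"

# Step 2 (sketch): after the user authorizes in a browser, poll the token endpoint.
curl -s -d "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
    -d "client_id=ipa_oidc_client" \
    -d "device_code=<device_code_from_step_1>" \
    "https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/token"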
Prerequisites You have registered IdM as an OAuth application to your external IdP, and obtained a client ID. You can authenticate as the IdM admin account. Your IdM servers are using RHEL 8.7 or later. Your IdM servers are using SSSD 2.7.0 or later. Procedure Authenticate as the IdM admin on an IdM server. Create a reference called my-keycloak-idp to an IdP based on the Keycloak template, where the --base-url option specifies the URL to the Keycloak server in the format server-name.$DOMAIN:$PORT/prefix . Verification Verify that the output of the ipa idp-show command shows the IdP reference you have created. Additional resources Example references to different external IdPs in IdM Options for the ipa idp-* commands to manage external identity providers in IdM The --provider option in the ipa idp-* commands ipa help idp-add 114.4. Example references to different external IdPs in IdM The following table lists examples of the ipa idp-add command for creating references to different IdPs in IdM. Identity Provider Important options Command example Microsoft Identity Platform, Azure AD --provider microsoft --organization Google --provider google GitHub --provider github Keycloak, Red Hat Single Sign-On --provider keycloak --organization --base-url Note The Quarkus versions of Keycloak 17 and later have removed the /auth/ portion of the URI. If you use the non-Quarkus distribution of Keycloak in your deployment, include /auth/ in the --base-url option. Okta --provider okta Additional resources Creating a reference to an external identity provider Options for the ipa idp-* commands to manage external identity providers in IdM The --provider option in the ipa idp-* commands 114.5. Options for the ipa idp-* commands to manage external identity providers in IdM The following examples show how to configure references to external IdPs based on the different IdP templates. Use the following options to specify your settings: --provider the predefined template for one of the known identity providers --client-id the OAuth 2.0 client identifier issued by the IdP during application registration. As the application registration procedure is specific to each IdP, refer to their documentation for details. If the external IdP is Red Hat Single Sign-On (SSO), see Creating an OpenID Connect Client . --base-url base URL for IdP templates, required by Keycloak and Okta --organization Domain or Organization ID from the IdP, required by Microsoft Azure --secret (optional) Use this option if you have configured your external IdP to require a secret from confidential OAuth 2.0 clients. If you use this option when creating an IdP reference, you are prompted for the secret interactively. Protect the client secret as a password. Note SSSD in RHEL 8.7 only supports non-confidential OAuth 2.0 clients that do not use a client secret. If you want to use external IdPs that require a client secret from confidential clients, you must use SSSD in RHEL 8.8 and later. Additional resources Creating a reference to an external identity provider Example references to different external IdPs in IdM The --provider option in the ipa idp-* commands 114.6. Managing references to external IdPs After you have created a reference to an external identity provider (IdP), you can find, show, modify, and delete that reference. This example shows you how to manage a reference to an external IdP named keycloak-server1 . Prerequisites You can authenticate as the IdM admin account. Your IdM servers are using RHEL 8.7 or later.
Your IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . Procedure Authenticate as the IdM admin on an IdM server. Manage the IdP reference. To find an IdP reference whose entry includes the string keycloak : To display an IdP reference named my-keycloak-idp : To modify an IdP reference, use the ipa idp-mod command. For example, to change the secret for an IdP reference named my-keycloak-idp , specify the --secret option to be prompted for the secret: To delete an IdP reference named my-keycloak-idp : 114.7. Enabling an IdM user to authenticate via an external IdP To enable an IdM user to authenticate via an external identity provider (IdP), associate the external IdP reference you have previously created with the user account. This example associates the external IdP reference keycloak-server1 with the user idm-user-with-external-idp . Prerequisites Your IdM client and IdM servers are using RHEL 8.7 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . Procedure Modify the IdM user entry to associate an IdP reference with the user account: Verification Verify that the output of the ipa user-show command for that user displays references to the IdP: 114.8. Retrieving an IdM ticket-granting ticket as an external IdP user If you have delegated authentication for an Identity Management (IdM) user to an external identity provider (IdP), the IdM user can request a Kerberos ticket-granting ticket (TGT) by authenticating to the external IdP. Complete this procedure to: Retrieve and store an anonymous Kerberos ticket locally. Request the TGT for the idm-user-with-external-idp user by using kinit with the -T option to enable a Flexible Authentication via Secure Tunneling (FAST) channel to provide a secure connection between the Kerberos client and the Kerberos Key Distribution Center (KDC). Prerequisites Your IdM client and IdM servers use RHEL 8.7 or later. Your IdM client and IdM servers use SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . You have associated an external IdP reference with the user account. See Enabling an IdM user to authenticate via an external IdP . The user that you are initially logged in as has write permissions on a directory in the local filesystem. Procedure Use Anonymous PKINIT to obtain a Kerberos ticket and store it in a file named ./fast.ccache . Optional: View the retrieved ticket: Begin authenticating as the IdM user, using the -T option to enable the FAST communication channel. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. The pa_type = 152 indicates external IdP authentication. 114.9. Logging in to an IdM client via SSH as an external IdP user To log in to an IdM client via SSH as an external identity provider (IdP) user, begin the login process on the command line. When prompted, perform the authentication process at the website associated with the IdP, and finish the process at the Identity Management (IdM) client.
Prerequisites Your IdM client and IdM servers are using RHEL 8.7 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . You have associated an external IdP reference with the user account. See Enabling an IdM user to authenticate via an external IdP . Procedure Attempt to log in to the IdM client via SSH. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. 114.10. The --provider option in the ipa idp-* commands The following identity providers (IdPs) support the OAuth 2.0 device authorization grant flow: Microsoft Identity Platform, including Azure AD Google GitHub Keycloak, including Red Hat Single Sign-On (SSO) Okta When using the ipa idp-add command to create a reference to one of these external IdPs, you can specify the IdP type with the --provider option, which expands into additional options as described below: --provider=microsoft Microsoft Azure IdPs allow parametrization based on the Azure tenant ID, which you can specify with the --organization option to the ipa idp-add command. If you need support for the live.com IdP, specify the option --organization common . Choosing --provider=microsoft expands to use the following options. The value of the --organization option replaces the string ${ipaidporg} in the table. Option Value --auth-uri=URI https://login.microsoftonline.com/${ipaidporg}/oauth2/v2.0/authorize --dev-auth-uri=URI https://login.microsoftonline.com/${ipaidporg}/oauth2/v2.0/devicecode --token-uri=URI https://login.microsoftonline.com/${ipaidporg}/oauth2/v2.0/token --userinfo-uri=URI https://graph.microsoft.com/oidc/userinfo --keys-uri=URI https://login.microsoftonline.com/common/discovery/v2.0/keys --scope=STR openid email --idp-user-id=STR email --provider=google Choosing --provider=google expands to use the following options: Option Value --auth-uri=URI https://accounts.google.com/o/oauth2/auth --dev-auth-uri=URI https://oauth2.googleapis.com/device/code --token-uri=URI https://oauth2.googleapis.com/token --userinfo-uri=URI https://openidconnect.googleapis.com/v1/userinfo --keys-uri=URI https://www.googleapis.com/oauth2/v3/certs --scope=STR openid email --idp-user-id=STR email --provider=github Choosing --provider=github expands to use the following options: Option Value --auth-uri=URI https://github.com/login/oauth/authorize --dev-auth-uri=URI https://github.com/login/device/code --token-uri=URI https://github.com/login/oauth/access_token --userinfo-uri=URI https://api.github.com/user --scope=STR user --idp-user-id=STR login --provider=keycloak With Keycloak, you can define multiple realms or organizations. Since it is often a part of a custom deployment, both the base URL and the realm ID are required, and you can specify them with the --base-url and --organization options to the ipa idp-add command: Choosing --provider=keycloak expands to use the following options. The value you specify in the --base-url option replaces the string ${ipaidpbaseurl} in the table, and the value you specify for the --organization option replaces the string ${ipaidporg} .
Option Value --auth-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/auth --dev-auth-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/auth/device --token-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/token --userinfo-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/userinfo --scope=STR openid email --idp-user-id=STR email --provider=okta After registering a new organization in Okta, a new base URL is associated with it. You can specify this base URL with the --base-url option to the ipa idp-add command: Choosing --provider=okta expands to use the following options. The value you specify for the --base-url option replaces the string ${ipaidpbaseurl} in the table. Option Value --auth-uri=URI https://${ipaidpbaseurl}/oauth2/v1/authorize --dev-auth-uri=URI https://${ipaidpbaseurl}/oauth2/v1/device/authorize --token-uri=URI https://${ipaidpbaseurl}/oauth2/v1/token --userinfo-uri=URI https://${ipaidpbaseurl}/oauth2/v1/userinfo --scope=STR openid email --idp-user-id=STR email Additional resources Pre-populated IdP templates | [
"kinit admin",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id id13778 ------------------------------------------------ Added Identity Provider reference \"my-keycloak-idp\" ------------------------------------------------ Identity Provider reference name: my-keycloak-idp Authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth Device authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth/device Token URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/token User info URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/userinfo Client identifier: ipa_oidc_client Scope: openid email External IdP user identifier attribute: email",
"ipa idp-show my-keycloak-idp",
"ipa idp-add my-azure-idp --provider microsoft --organization main --client-id <azure_client_id>",
"ipa idp-add my-google-idp --provider google --client-id <google_client_id>",
"ipa idp-add my-github-idp --provider github --client-id <github_client_id>",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id <keycloak_client_id>",
"ipa idp-add my-okta-idp --provider okta --base-url dev-12345.okta.com --client-id <okta_client_id>",
"kinit admin",
"ipa idp-find keycloak",
"ipa idp-show my-keycloak-idp",
"ipa idp-mod my-keycloak-idp --secret",
"ipa idp-del my-keycloak-idp",
"ipa user-mod idm-user-with-external-idp --idp my-keycloak-idp --idp-user-id [email protected] --user-auth-type=idp --------------------------------- Modified user \"idm-user-with-external-idp\" --------------------------------- User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-show idm-user-with-external-idp User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] ID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"kinit -n -c ./fast.ccache",
"klist -c fast.ccache Ticket cache: FILE:fast.ccache Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/03/2024 13:36:37 03/04/2024 13:14:28 krbtgt/[email protected]",
"kinit -T ./fast.ccache idm-user-with-external-idp Authenticate at https://oauth2.idp.com:8443/auth/realms/master/device?user_code=YHMQ-XKTL and press ENTER.:",
"klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"[user@client ~]USD ssh [email protected] ([email protected]) Authenticate at https://oauth2.idp.com:8443/auth/realms/main/device?user_code=XYFL-ROYR and press ENTER.",
"[idm-user-with-external-idp@client ~]USD klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"ipa idp-add MySSO --provider keycloak --org main --base-url keycloak.domain.com:8443/auth --client-id <your-client-id>",
"ipa idp-add MyOkta --provider okta --base-url dev-12345.okta.com --client-id <your-client-id>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/assembly_using-external-identity-providers-to-authenticate-to-idm_configuring-and-managing-idm |
Tuning performance of Red Hat Satellite | Tuning performance of Red Hat Satellite Red Hat Satellite 6.16 Optimize performance for Satellite Server and Capsule Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/tuning_performance_of_red_hat_satellite/index |
Chapter 6. Prerequisites for installation | Chapter 6. Prerequisites for installation The Red Hat Certificate System installation process requires some preparation of the environment. This chapter describes the requirements, dependencies, and other prerequisites for installing Certificate System in a Common Criteria environment. 6.1. Installing and subscribing the RHEL machines Red Hat Certificate System requires Red Hat Enterprise Linux 8.6. Prerequisites You have an installation image of the latest build of RHEL 8.6 x86_64 . Procedure On both machines, install RHEL 8 with all z-stream updates. Both BaseOS and AppStream repositories must be enabled (by default, those repositories are part of the full installation image that is available on the Red Hat Customer Portal and already enabled). For example, to verify that the BaseOS and AppStream repos are enabled for RHEL x86_64: Both machines should be registered and subscribed with a valid RHEL subscription. For example: NOTE Check if Simple Content Access (SCA) mode is enabled on your account using: After registering with subscription-manager, if you experience any issues installing packages from the enabled BaseOS and AppStream repositories, disable SCA in the subscription management page in the Access Portal. If you are not the administrator of your account, you will need to request the administrator to do so. Attach the pool ID containing your Red Hat Enterprise Linux subscription using the same method as outlined in the step below, and then try again to install the packages. Attach the Red Hat subscriptions to the system. If your system already has a subscription attached that provides Certificate System, or if Simple Content Access (SCA) is set to the default setting enabled, skip to step 3. List the available subscriptions and note the pool ID providing Red Hat Certificate System. For example: Depending on the number of subscriptions you have, the output can be very long. In this case, you can redirect it to a file: Attach the Certificate System subscription to the system using the pool ID from the previous step: "Pin" the RHEL version to 8.6 by using the subscription-manager release --set command. For example: Verification: In addition, on rhcs10.example.com , install the environment group Server with GUI : Additional resources For more information on installing RHEL 8, see the RHEL 8 documentation . 6.2. Enabling the repositories Before you can install and update Red Hat Certificate System, you must enable the corresponding repositories for Certificate System and Directory Server. Prerequisites You have installed and subscribed both machines (one for Certificate System and one for Directory Server). See Section 6.1, "Installing and subscribing the RHEL machines" . Enabling online repositories: If you are installing Red Hat Certificate System with online repositories, follow the steps below on the Certificate System and on the Directory Server machine: Enable the Certificate System repository on rhcs10.example.com : Where x denotes the latest Certificate System version. For example, to enable the Certificate System repository for Red Hat Certificate System 10.4, use the following command: Enable the Directory Server repository on rhds11.example.com : Note For compliance, only enable Red Hat approved repositories. You can only enable repositories approved by Red Hat through the subscription-manager utility.
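As an optional sanity check that is not part of the official procedure, you can list the repositories now active on each machine and confirm that the expected BaseOS, AppStream, and product repositories appear:

# List all enabled repositories known to dnf.
dnf repolist enabled

# Alternatively, show only the repositories managed by subscription-manager.
subscription-manager repos --list-enabled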
ISO repositories If you are installing RHCS with ISO repositories, follow the steps below: On rhcs10.example.com : Create a repo file in /etc/yum.repos.d/ : Install the Apache web server, if it is not already installed on the system: Start the httpd service: Create a directory that will be used as the web root for hosting the ISO repository. For example: Mount the ISO to the directory. For example: On rhds11.example.com : Create a repo file in /etc/yum.repos.d/ : Install the Apache web server, if it is not already installed on the system: Start the httpd service: Create a directory that will be used as the web root for hosting the ISO repository. For example: Mount the ISO to the directory. For example: 6.3. Setting the FQDN Make sure the Fully Qualified Domain Name (FQDN) of each host matches how you wish them to be recognized. For example, run the following on both machines: If a hostname is not what you expect it to be, you can configure the FQDN using hostnamectl . For example, to update the CS machine's hostname: To update the DS machine's hostname: Additionally, add both the CS and DS machines' IP addresses and new hostnames as entries in /etc/hosts : Verify the FQDN again after the change: 6.4. Enabling FIPS on RHEL 8 FIPS mode must be enabled before you install the Certificate System. To check whether your system is in FIPS mode, run the following command: If the returned value is 1 , FIPS mode is enabled. The following procedure demonstrates how to enable the Federal Information Processing Standard (FIPS) mode on both rhcs10.example.com and rhds11.example.com . To switch to FIPS mode, use the fips-mode-setup --enable command. Restart your system to allow the kernel to switch to FIPS mode: Verify the current state of FIPS mode after the restart: Note If an existing directory server is running on a non-FIPS RHEL 8 system that has only just had FIPS enabled, you will need to reset the Directory Manager password to allow the existing directory server to run properly. For more information, see Managing the Directory Manager Password in the Red Hat Directory Server Administration Guide. Additional resources For more information, you can refer to Switching the system to FIPS mode . 6.5. Setting up fapolicyd (for STIG environments) The fapolicyd software framework controls the execution of applications based on a user-defined policy. In a STIG environment, installing Certificate System will fail if fapolicyd is not set up properly. The following procedure describes how to add the rules needed to install and run RHCS instances. Important Do not follow this section unless you are certain that your system is in a STIG environment. In case you complete the below procedure unnecessarily, and later run into issues when running pkispawn , you will need to revert the changes before proceeding. Procedure To add the required fapolicyd rule: As root, create a file under /etc/fapolicyd/rules.d/ with a unique name. The prefix must contain a number in the 30s range for the priority, such as 35 or 39 (for example, 35-allow-java.rules ). Add the following rule: After saving the file, restart the fapolicyd service to recompile the rules: 6.6. Configuring an HSM To use a Hardware Security Module (HSM), a Federal Information Processing Standard (FIPS) 140-2 validated HSM is required. Red Hat Certificate System supports the nShield Connect XC hardware security module (HSM) and Thales Luna HSM by default (please see Section 4.4, "Supported Hardware Security Modules" for more information on Luna's limitations).
Certificate System-supported HSMs are automatically added to the pkcs11.txt database with the modutil command during the pre-configuration stage of the installation, if the PKCS #11 library modules are in the specified installation paths. Configure rhcs10.example.com to be the HSM client machine. Important Please follow the instructions provided by your HSM vendor for your specific HSM brand / model / release. In our example, an nShield Connect XC unit is installed and configured with the latest software and firmware for compliance with FIPS 140-2 (Level 3). As of this writing, the RFS software is SecWorld_Lin64-12.71.0, the firmware is nShield firmware 12.72.1 (FIPS certified), image 12.80.5. 6.6.1. FIPS mode on an HSM To use a Hardware Security Module (HSM), a Federal Information Processing Standard (FIPS) 140-2 validated HSM is required. Certain deployments require setting up their HSM to use FIPS mode. To enable FIPS mode on HSMs, please refer to your HSM vendor's documentation. Important nShield Connect XC HSM On an nShield Connect XC HSM, the FIPS mode can only be enabled when generating the Security World; this cannot be changed afterwards. While there is a variety of ways to generate the Security World, the preferred method is always to use the new-world command. For guidance on how to generate a FIPS-compliant Security World, please follow the HSM vendor's documentation. Luna HSM Similarly, enabling the FIPS mode on a Luna HSM must be done during the initial configuration, since changing this policy zeroizes the HSM as a security measure. For details, please refer to the Luna HSM vendor's documentation. Please see Section 4.4, "Supported Hardware Security Modules" for more information on Luna's limitations. The following steps help you verify whether FIPS mode is enabled for nShield Connect XC and Luna HSMs. For other HSMs, please refer to your HSM manufacturer's documentation. nShield Connect XC HSM To verify if the FIPS mode is enabled on an nShield HSM, enter: With older versions of the software, if StrictFIPS140 is listed in the state flag, the FIPS mode is enabled. In newer versions, it is, however, better to check the new mode line and look for fips1402level3 . In all cases, there should also be an hkfips key present in the nfkminfo output. Luna HSM To verify if the FIPS mode is enabled on a Luna HSM: Open the lunash management console Use the hsm show command and verify that the output contains the text The HSM is in FIPS 140-2 approved operation mode. : Note Please refer to your HSM vendor's documentation for complete procedures. 6.6.2. Setting up SELinux for an HSM Certain HSMs require that you manually update SELinux settings before you can install Certificate System. The following describes nShield and Luna HSMs. For other HSMs, please refer to your HSM manufacturer's documentation. nShield Connect XC After you have installed the HSM and before you start installing Certificate System: Reset the context of files in the /opt/nfast/ directory: Restart the nfast software. Thales Luna HSM No SELinux-related actions are required before you start installing Certificate System. For details about supported HSMs and their limits, see Section 4.4, "Supported Hardware Security Modules" .
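As an optional check that is not part of the vendor procedure, you can confirm after running restorecon that the nfast files now carry their default SELinux labels:

# Show the SELinux context actually set on the installed files...
ls -Z /opt/nfast/bin | head
# ...and the default context the policy expects for that path.
matchpathcon /opt/nfast/bin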
6.6.3. Preparing for installing Certificate System with an HSM In Chapter 7, Installing and configuring Red Hat Certificate System , you are instructed to use the following parameters in the configuration file you pass to the pkispawn utility when installing Certificate System with an HSM: The values of the pki_hsm_libfile and pki_token_name parameters depend on your specific HSM installation. These values allow the pkispawn utility to set up your HSM and enable Certificate System to connect to it. The value of the pki_token_password depends upon your particular HSM token's password. The password gives the pkispawn utility read and write permissions to create new keys on the HSM. The value of the pki_hsm_modulename is a name used in later pkispawn operations to identify the HSM. The string is an identifier you can set to whatever you like. It allows pkispawn and Certificate System to refer to the HSM and configuration information by name in later operations. The following section provides settings for individual HSMs. If your HSM is not listed, consult your HSM manufacturer's documentation. nShield HSM parameters For an nShield Connect XC, set the following parameters: Note that you can set the value of pki_hsm_modulename to any value. The above is a suggested value. To identify the token name, run the following command as the root user: The value of the name field in the Cardset section lists the token name. Set the token name as follows: SafeNet / Luna HSM parameters For a SafeNet / Luna HSM, such as a SafeNet Luna Network HSM, specify the following parameters: Note that you can set the value of pki_hsm_modulename to any value. The above is a suggested value. To identify the token name, run the following command as the root user: The value in the label column lists the token name. Set the token name as follows: Note Please see Section 4.4, "Supported Hardware Security Modules" for more information on Luna's limitations. 6.6.4. Testing the HSM connection To test the HSM connection: Create a temporary database: Add the PKCS #11 library module to the database: nShield Connect XC: Thales Luna: List the modules and note down the HSM name at " token: " for the next step (for example, NHSM-CONN-XC in the example below): Display the certificates for this token:
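After a successful test, you may want to discard the temporary database again. A possible cleanup, not part of the official procedure, is:

# Remove the HSM module reference from the temporary database (-force skips the interactive confirmation), then delete the database directory.
modutil -dbdir /root/tmp1 -delete nfast -force
rm -rf /root/tmp1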
6.7. Verifying SELinux enforcement Security-Enhanced Linux (SELinux) is an implementation of a mandatory access control mechanism in the Linux kernel, checking for allowed operations after standard discretionary access controls are checked. SELinux can enforce rules on files and processes in a Linux system, and on their actions, based on defined policies. By default, RHEL 8 is installed with SELinux enabled. The SELinux policy must be set to Enforcing . To verify the current SELinux mode: Optional : If you need to set the policy to Enforcing : Additional resources For further details about SELinux, see the Using SELinux Guide . 6.8. Adding ports to the firewall and with SELinux context In our examples, Certificate System subsystems use the following ports. You might want to bookmark the following table for ease of reference to selected ports used by the example installations. Table 6.1. Ports Instance and services Ports (RSA) Ports (ECC) RootCA HTTP / HTTPS 8080 / 8443 20080 / 20443 CRL HTTP 8085 20085 LDAP 389 / 636 1389 / 1636 Tomcat 8009 / 8005 20009 / 20005 SubCA HTTP / HTTPS 31080 / 31443 21080 / 21443 CRL HTTP 31085 21085 LDAP 7389 / 7636 8389 / 8636 Tomcat 31009 / 31005 21009 / 21005 OCSP (RootCA) HTTP / HTTPS 33080 / 33443 34080 / 34443 LDAP 6389 / 6636 2389 / 2636 Tomcat 33009 / 33005 34009 / 34005 CRL publishing 12389 / 12636 13389 / 13636 OCSP (SubCA) HTTP / HTTPS 32080 / 32443 22080 / 22443 LDAP 11389 / 11636 9389 / 9636 Tomcat 32009 / 32005 22009 / 22005 CRL publishing 5389 / 5636 14389 / 14636 KRA HTTP / HTTPS 28080 / 28443 23080 / 23443 LDAP 22389 / 22636 4389 / 4636 Tomcat 28009 / 28005 23009 / 23005 TKS HTTP / HTTPS 24080 / 24443 N/A LDAP 16389 / 16636 N/A Tomcat Management 14009 / 14005 N/A TPS HTTP / HTTPS 25080 / 25443 N/A LDAP 17389 / 17636 N/A TPS Auth 9389 / 9636 N/A Tomcat Management 14019 / 14015 N/A Note When you set up Certificate System using the pkispawn utility, you can customize the port numbers. If you use different ports than the ones listed above, open them correspondingly in the firewall as described below. To enable communication between the clients and Certificate System, open the required ports in your firewall on the machine that will be hosting the corresponding service: Make sure the firewalld service is running. To start firewalld and configure it to start automatically when the system boots: Adding ports to the firewall Open the required ports using the firewall-cmd utility. For example, to open the default ports for the RootCA instance in the default firewall zone: Additionally, to open the default ports for the RootCA's LDAP instance: Verify that all ports that will be used are successfully added to the firewall: Reload the firewall configuration to ensure that the change takes place immediately: Adding ports with SELinux context If you want to add non-default ports, you will need to add them with SELinux context. If not, you will get an error like the following: Installation failed: port 33080 has invalid selinux context ephemeral_port_t . For CS instances, add SELinux context to all ports that will be used as type http_port_t . You can do this quickly using a for loop command with all the ports you need to add. For example, to add the default RootCA ports: For DS ports, replace the port type option http_port_t with ldap_port_t . For example, for the RootCA's LDAP ports: Verify that all ports that will be used are successfully added with SELinux context:
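If you ever add a port with the wrong type, the matching delete operation removes the label again; a sketch for one illustrative port:

# Remove a previously added SELinux port label (here: drop http_port_t from TCP port 33080).
semanage port -d -t http_port_t -p tcp 33080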
Additional resources For further details about ports, see Section 5.5.3, "Planning ports" . 6.9. Installing RHCS and RHDS packages This section describes the installation of Red Hat Directory Server (RHDS) and Red Hat Certificate System (RHCS) packages and their initial configuration. When installing the Certificate System packages you can either install them for each subsystem individually or all at once. The following subsystem packages and components are available in Red Hat Certificate System: pki-ca : Provides the Certificate Authority (CA) subsystem. pki-kra : Provides the Key Recovery Authority (KRA) subsystem. pki-ocsp : Provides the Online Certificate Status Protocol (OCSP) responder. pki-tks : Provides the Token Key Service (TKS). pki-tps : Provides the Token Processing Service (TPS). pki-server and redhat-pki-server-theme : Provides the web-based Certificate System interface. Both packages must be installed. This is installed as a dependency if you install one of the following packages: pki-ca , pki-kra , pki-ocsp , pki-tks , pki-tps . pki-console and redhat-pki-console-theme : Provides the Java-based Red Hat PKI console. Both packages must be installed. pki-acme provides Automatic Certificate Management Environment (ACME). pki-est is available as Technology Preview, providing Enrollment over Secure Transport (EST). Note Technology Preview features provide early access to upcoming product functionality, and are not yet fully supported under subscription agreements. Important ACME (Automatic Certificate Management Environment) and Enrollment over Secure Transport (EST) are not evaluated and must not be used in the Common Criteria configuration. With the redhat-pki module, you can install all Certificate System subsystem packages and components at once on a RHEL 8 system. The redhat-pki module installs the five subsystems of Red Hat Certificate System: in addition to the pki-core module (CA, KRA), which is part of Red Hat Identity Management (IdM), it includes the RHCS-specific subsystems (OCSP, TKS, and TPS) as well as the pki-deps module, which takes care of the required dependencies. Prerequisites You have enabled the corresponding repositories, as described in Section 6.2, "Enabling the repositories" . Install the packages Install the Red Hat Certificate System (RHCS) subsystem packages as follows: On rhcs10.example.com , enable the RHCS module and install the RHCS 10.4 packages: This installs the following packages: In addition, on rhds11.example.com , install the RHDS module to install all the Red Hat Directory Server 11.5 packages: Create directories for storing pki files On rhcs10.example.com : On rhds11.example.com : Verifying Certificate System product version The Red Hat Certificate System product version is stored in the /usr/share/pki/CS_SERVER_VERSION file. To display the version: To display the PKI version: Note Future updates will have newer version numbers (that is, 10.4.x). Note Once you have a server installed and running, you can find the product version for each instance by accessing the URLs as instructed in Section 7.13.16, "Determining the product version" . Updating Certificate System packages To update Certificate System and operating system packages, use the dnf update command. For example: This updates the whole system including the RHCS packages. You can verify the version number before and after updating packages, to confirm they were successfully installed. Important Updating Certificate System requires the PKI infrastructure to be restarted. We suggest scheduling a maintenance window during which you can take the PKI infrastructure offline to install the update. To optionally download updates without installing, use the --downloadonly option in the above procedure: The downloaded packages are stored in the /var/cache/yum/ directory. A later dnf update will use the downloaded packages if they are still the latest versions. | [
"subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms",
"subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms",
"subscription-manager register --username= <customer access portal username>",
"subscription-manager status",
"subscription-manager list --available --all Subscription Name: Red Hat Enterprise Linux Developer Suite Provides: Red Hat Certificate System Pool ID: 7aba89677a6a38fc0bba7dac673f7993 Available: 1",
"subscription-manager list --available --all > /root/subscriptions.txt",
"subscription-manager attach --pool=7aba89677a6a38fc0bba7dac673f7993 Successfully attached a subscription for: Red Hat Enterprise Linux Developer Suite",
"subscription-manager release --list",
"subscription-manager release --set 8.6",
"subscription-manager release --show",
"dnf groupinstall \"Server with GUI\"",
"subscription-manager repos --enable certsys-10.x-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable certsys-10.4-for-rhel-8-x86_64-rpms Repository 'certsys-10.4-for-rhel-8-x86_64-rpms' is enabled for this system.",
"subscription-manager repos --enable=dirsrv-11-for-rhel-8-x86_64-rpms Repository 'dirsrv-11-for-rhel-8-x86_64-rpms' is enabled for this system.",
"vi /etc/yum.repos.d/redhat.repo",
"[rhcs10] name=rhcs10 baseurl=http://rhcs10.example.com/rhcs10/ enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release skip_if_unavailable=1",
"dnf install httpd",
"service httpd start",
"mkdir -p /var/www/html/rhcs10",
"mount -o loop XXXXXXX-CertificateSystem-x86_64-dvd1.iso /var/www/html/rhcs10",
"vi /etc/yum.repos.d/redhat.repo",
"[rhds11] name=rhds11 baseurl=http://rhds11.example.com/rhds11/ enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release skip_if_unavailable=1",
"dnf install httpd",
"service httpd start",
"mkdir -p /var/www/html/rhds11",
"mount -o loop XXXXXXX-DirectoryServer-x86_64-dvd1.iso /var/www/html/rhds11",
"hostname",
"hostnamectl set-hostname rhcs10.example.com",
"hostnamectl set-hostname rhds11.example.com",
"vi /etc/hosts",
"127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 10.1.111.111 rhcs10.example.com 10.2.222.222 rhds11.example.com",
"hostname",
"sysctl crypto.fips_enabled",
"fips-mode-setup --enable Kernel initramdisks are being regenerated. This might take some time. Setting system policy to FIPS Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the change of policies to fully take place. FIPS mode will be enabled. Please reboot the system for the setting to take effect.",
"reboot",
"fips-mode-setup --check FIPS mode is enabled.",
"vi /etc/fapolicyd/35-allow-java.rules",
"allow perm=open dir=/usr/lib/jvm/ : dir=/usr/share/tomcat/bin/ ftype=application/java-archive",
"systemctl restart fapolicyd.service",
"/opt/nfast/bin/nfkminfo",
"lunash:> hsm show FIPS 140-2 Operation: ===================== The HSM is in FIPS 140-2 approved operation mode.",
"restorecon -R /opt/nfast/",
"/opt/nfast/sbin/init.d-ncipher restart",
"######################### Provide HSM parameters # ########################## pki_hsm_enable=True pki_hsm_libfile=hsm_libfile pki_hsm_modulename=hsm_modulename pki_token_name=hsm_token_name pki_token_password=pki_token_password ######################################## Provide PKI-specific HSM token names # ######################################## pki_audit_signing_token=hsm_token_name pki_ssl_server_token=hsm_token_name pki_subsystem_token=hsm_token_name",
"pki_hsm_libfile=/opt/nfast/toolkits/pkcs11/libcknfast.so pki_hsm_modulename=nfast",
"/opt/nfast/bin/nfkminfo Module #1 Slot #0 IC 1 generation 1 phystype SmartCard slotlistflags 0x2 SupportsAuthentication state 0x5 Operator flags 0x10000 shareno 1 (`CONNXC-1') shares LTU(PIN) LTFIPS error OK Cardset name \"NHSM-CONN-XC\" k-out-of-n 1/2 flags Persistent PINRecoveryForbidden(disabled) !RemoteEnabled timeout none card names \"CONNXC-1\" \"CONNXC-2\" hkltu xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx gentime 2021-11-17 21:19:47 Module #1 Slot #1 IC 0 generation 1 phystype SoftToken slotlistflags 0x0 state 0x2 Empty flags 0x0 shareno 0 shares error OK No Cardset No Pre-Loaded Objects",
"pki_token_name=NHSM-CONN-XC",
"pki_hsm_libfile=/usr/safenet/lunaclient/lib/libCryptoki2_64.so pki_hsm_modulename=thalesluna",
"/usr/safenet/lunaclient/bin/vtl verify The following Luna Slots/Partitions were found: Slot Serial # Label === =============== ===== 0 1209461834772 thaleslunaQE",
"pki_token_name=thaleslunaQE",
"mkdir -p /root/tmp1",
"certutil -N -d /root/tmp1",
"modutil -dbdir /root/tmp1 -nocertdb -add nfast -libfile /opt/nfast/toolkits/pkcs11/libcknfast.so --- Module \"nfast\" added to database.",
"modutil -dbdir ~/testLuna -nocertdb -add thalesluna -libfile /usr/safenet/lunaclient/lib/libCryptoki2_64.so --- Module \"thalesluna\" added to database.",
"modutil -dbdir /root/tmp1 -list 1. NSS Internal PKCS #11 Module ... token: NSS FIPS 140-2 Certificate DB ... 2. nfast ... token: accelerator ... token: NHSM-CONN-XC ...",
"certutil -L -d /root/tmp1 -h <token name>",
"/usr/sbin/getenforce Enforcing",
"/usr/sbin/sestatus SELinux status: enabled SELinuxfs mount: /sys/fs/selinux SELinux root directory: /etc/selinux Loaded policy name: targeted Current mode: enforcing Mode from config file: enforcing Policy MLS status: enabled Policy deny_unknown status: allowed Memory protection checking: actual (secure) Max kernel policy version: 33",
"/usr/sbin/setenforce 1 Enforcing",
"systemctl status firewalld",
"systemctl start firewalld systemctl enable firewalld",
"firewall-cmd --permanent --add-port={8080/tcp,8443/tcp,8009/tcp,8005/tcp}",
"firewall-cmd --permanent --add-port={389/tcp,636/tcp}",
"firewall-cmd --list-ports",
"firewall-cmd --reload",
"for port in 8080 8443 8009 8005 31080 31443 31009 31005 33080 33443 33009 33005 32080 32443 32009 32005 28080 28443 28009 28005 24080 24443 14009 14005 25080 25443 14019 14015; do semanage port -a -t http_port_t -p tcp USDport; done",
"for port in 389 636 7389 7636 6389 6636 12389 12636 11389 11636 5389 5636 22389 22636 16389 16636 17389 17636; do semanage port -a -t ldap_port_t -p tcp USDi; done",
"semanage port -l",
"dnf module enable redhat-pki",
"dnf install redhat-pki",
"idm-console-framework-1.3.0-1.module+el8pki+14677+1ef79a68.noarch.rpm jss-4.9.10-1.module+el8pki+21949+4b2d0700.x86_64.rpm jss-javadoc-4.9.10-1.module+el8pki+21949+4b2d0700.x86_64.rpm ldapjdk-4.23.0-1.module+el8pki+14677+1ef79a68.noarch.rpm ldapjdk-javadoc-4.23.0-1.module+el8pki+14677+1ef79a68.noarch.rpm python3-redhat-pki-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-10.13.11-1.module+el8pki+21949+4b2d0700.x86_64.rpm redhat-pki-acme-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-base-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-base-java-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-ca-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-console-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-console-theme-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-est-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-javadoc-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-kra-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-ocsp-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-server-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-server-theme-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-symkey-10.13.11-1.module+el8pki+21949+4b2d0700.x86_64.rpm redhat-pki-tks-10.13.11-1.module+el8pki+21949+4b2d0700.noarch.rpm redhat-pki-tools-10.13.11-1.module+el8pki+21949+4b2d0700.x86_64.rpm redhat-pki-tps-10.13.11-1.module+el8pki+21949+4b2d0700.x86_64.rpm tomcatjss-7.7.4-1.module+el8pki+21738+33a5e23b.noarch.rpm",
"dnf module install redhat-ds:11",
"mkdir -p /root/pki_rsa",
"mkdir -p /opt/pki_rsa",
"mkdir -p /root/pki_rsa/dirsrv",
"mkdir -p /opt/pki_rsa",
"mkdir -p /etc/dirsrv/save-rsa",
"cat /usr/share/pki/CS_SERVER_VERSION Red Hat Certificate System 10.4.3",
"cat /usr/share/pki/VERSION Name: pki Specification-Version: 10.13.11 Implementation-Version: 10.13.11-1.module+el8pki+21949+4b2d0700",
"dnf update",
"dnf update --downloadonly"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/prerequisites_and_preparation_for_installation |
Chapter 4. Monitoring OpenShift sandboxed containers | Chapter 4. Monitoring OpenShift sandboxed containers You can use the OpenShift Container Platform web console to monitor metrics related to the health status of your sandboxed workloads and nodes. OpenShift sandboxed containers has a pre-configured dashboard available in the web console, and administrators can also access and query raw metrics through Prometheus. 4.1. About OpenShift sandboxed containers metrics OpenShift sandboxed containers metrics enable administrators to monitor how their sandboxed containers are running. You can query for these metrics in the Metrics UI in the web console. OpenShift sandboxed containers metrics are collected for the following categories: Kata agent metrics Kata agent metrics display information about the kata agent process running in the VM embedded in your sandboxed containers. These metrics include data from /proc/<pid>/[io, stat, status] . Kata guest OS metrics Kata guest OS metrics display data from the guest OS running in your sandboxed containers. These metrics include data from /proc/[stats, diskstats, meminfo, vmstats] and /proc/net/dev . Hypervisor metrics Hypervisor metrics display data regarding the hypervisor running the VM embedded in your sandboxed containers. These metrics mainly include data from /proc/<pid>/[io, stat, status] . Kata monitor metrics Kata monitor is the process that gathers metric data and makes it available to Prometheus. The kata monitor metrics display detailed information about the resource usage of the kata-monitor process itself. These metrics also include counters from Prometheus data collection. Kata containerd shim v2 metrics Kata containerd shim v2 metrics display detailed information about the kata shim process. These metrics include data from /proc/<pid>/[io, stat, status] and detailed resource usage metrics. 4.2. Viewing metrics for OpenShift sandboxed containers You can access the metrics for OpenShift sandboxed containers in the Metrics page in the web console. Prerequisites You have OpenShift Container Platform 4.11 installed. You have OpenShift sandboxed containers installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Metrics . In the input field, enter the query for the metric you want to observe. All kata-related metrics begin with kata . Typing kata will display a list with all of the available kata metrics. The metrics from your query are visualized on the page. Additional resources For more information about creating PromQL queries to view metrics, see Querying metrics .
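Purely for illustration, a PromQL query over such a metric might aggregate a per-VM counter into a rate; the metric name below is hypothetical, so substitute one from the list the UI offers when you type kata:

# Hypothetical kata counter: 5-minute I/O read rate, summed per pod.
sum by (pod) (rate(kata_shim_io_read_bytes[5m]))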
4.3. Viewing the OpenShift sandboxed containers dashboard You can access the OpenShift sandboxed containers dashboard in the Dashboards page in the web console. Prerequisites You have OpenShift Container Platform 4.11 installed. You have OpenShift sandboxed containers installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Dashboards . From the Dashboard drop-down list, select the Sandboxed Containers dashboard. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Define the date and time range for the data you want to view. Click Save to save the custom time range. Optional: Select a Refresh Interval . The dashboard appears on the page with the following metrics from the Kata guest OS category: Number of running VMs Displays the total number of sandboxed containers running on your cluster. CPU Usage (per VM) Displays the CPU usage for each individual sandboxed container. Memory Usage (per VM) Displays the memory usage for each individual sandboxed container. Hover over each of the graphs within a dashboard to display detailed information about specific items. 4.4. Additional resources For more information about gathering data for support, see Gathering data about your cluster . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/sandboxed_containers_support_for_openshift/monitoring-sandboxed-containers
Preface | Preface The release notes for Red Hat Trusted Application Pipeline summarize new features and enhancements, notable technical changes, features in Technology Preview, bug fixes, known issues, and other related advisories or information. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/release_notes_for_red_hat_trusted_application_pipeline_1.0/pr01 |
16.3. Using Logs | 16.3. Using Logs 16.3.1. Viewing Logs in the Console To troubleshoot the subsystem, check the error or informational messages that the server has logged. Examining the log files also helps you monitor many aspects of the server's operation. Some log files can be viewed through the Console. However, the audit log is only accessible by users with the Auditor role, using a method detailed in Section 16.3.2, "Using Signed Audit Logs" . To view the contents of a log file: Log into the Console. Select the Status tab. Under Logs , select the log to view. Set the viewing preferences in the Display Options section. Entries - The maximum number of entries to be displayed. When this limit is reached, the Certificate System returns any entries that match the search request. Zero (0) means no messages are returned. If the field is blank, the server returns every matching entry, regardless of the number found. Source - Select the Certificate System component or service for which log messages are to be displayed. Choosing All means messages logged by all components that log to this file are displayed. Level - Select a message category that represents the log level for filtering messages. Filename - Select the log file to view. Click Refresh . To view a full entry, double-click it, or select the entry, and click View . 16.3.2. Using Signed Audit Logs This section explains how a user in the Auditor group displays and verifies signed audit logs. 16.3.2.1. Listing Audit Logs As a user with auditor privileges, use the pki subsystem -audit-file-find command to list existing audit log files on the server. For example, to list the audit log files on the CA hosted on server.example.com : The command uses the client certificate with the auditor nickname stored in the ~/.dogtag/nssdb/ directory for authenticating to the CA. For further details about the parameters used in the command and alternative authentication methods, see the pki (1) man page. 16.3.2.2. Downloading Audit Logs As a user with auditor privileges, use the pki subsystem -audit-file-retrieve command to download a specific audit log from the server. For example, to download an audit log file from the CA hosted on server.example.com : Optionally, list the available log files on the CA. See Section 16.3.2.1, "Listing Audit Logs" . Download the log file. For example, to download the ca_audit file: The command uses the client certificate with the auditor nickname stored in the ~/.dogtag/nssdb/ directory for authenticating to the CA. For further details about the parameters used in the command and alternative authentication methods, see the pki (1) man page. After downloading a log file, you can search for specific log entries, for example, using the grep utility: 16.3.2.3. Verifying Signed Audit Logs If audit log signing is enabled, users with auditor privileges can verify the logs: Initialize the NSS database and import the CA certificate. For details, see Section 2.5.1.1, "pki CLI Initialization" and the Importing a certificate into an NSS Database section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . If the audit signing certificate does not exist in the PKI client database, import it: Search the audit signing certificate for the subsystem logs you want to verify. For example: Import the audit signing certificate into the PKI client: Download the audit logs. See Section 16.3.2.2, "Downloading Audit Logs" . Verify the audit logs. Create a text file that contains a list of the audit log files you want to verify in chronological order. For example: Use the AuditVerify utility to verify the signatures. For example: For further details about using AuditVerify , see the AuditVerify (1) man page.
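For orientation, a possible AuditVerify invocation is sketched here; the database directory, nickname, prefix, and list-file path are illustrative and must match your own environment:

# Verify the signatures over the audit logs listed (in chronological order) in logListFile.
AuditVerify -d ~/.dogtag/nssdb -n "CA Audit Signing Certificate" -a ~/logListFile -P "" -v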
Create a text file that contains a list of the audit log files you want to verify in chronological order. For example: Use the AuditVerify utility to verify the signatures. For example: For further details about using AuditVerify , see the AuditVerify (1) man page. 16.3.3. Displaying Operating System-level Audit Logs Note To see Operating System-level audit logs using the instructions below, the auditd logging framework must be configured per the Enabling OS-level Audit Logs section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . To display operating system-level access logs, use the ausearch utility as root or as a privileged user with the sudo utility. 16.3.3.1. Displaying Audit Log Deletion Events Since these events are keyed (with rhcs_audit_deletion ), use the -k parameter to find events matching that key: 16.3.3.2. Displaying Access to the NSS Database for Secret and Private Keys Since these events are keyed (with rhcs_audit_nssdb ), use the -k parameter to find events matching that key: 16.3.3.3. Displaying Time Change Events Since these events are keyed (with rhcs_audit_time_change ), use the -k parameter to find events matching that key: 16.3.3.4. Displaying Package Update Events Since these events are a typed message (of type SOFTWARE_UPDATE ), use the -m parameter to find events matching that type: 16.3.3.5. Displaying Changes to the PKI Configuration Since these events are keyed (with rhcs_audit_config ), use the -k parameter to find events matching that key: 16.3.4. Smart Card Error Codes Smart cards can report certain error codes to the TPS; these are recorded in the TPS's debug log file, depending on the cause for the message. Table 16.5. Smart Card Error Codes Return Code Description General Error Codes 6400 No specific diagnosis 6700 Wrong length in Lc 6982 Security status not satisfied 6985 Conditions of use not satisfied 6a86 Incorrect P1 P2 6d00 Invalid instruction 6e00 Invalid class Install Load Errors 6581 Memory Failure 6a80 Incorrect parameters in data field 6a84 Not enough memory space 6a88 Referenced data not found Delete Errors 6200 Application has been logically deleted 6581 Memory failure 6985 Referenced data cannot be deleted 6a88 Referenced data not found 6a82 Application not found 6a80 Incorrect values in command data Get Data Errors 6a88 Referenced data not found Get Status Errors 6310 More data available 6a88 Referenced data not found 6a80 Incorrect values in command data Load Errors 6581 Memory failure 6a84 Not enough memory space 6a86 Incorrect P1/P2 6985 Conditions of use not satisfied | [
"pki -h server.example.com -p 8443 -n auditor ca-audit-file-find ----------------- 3 entries matched ----------------- File name: ca_audit.20170331225716 Size: 2883 File name: ca_audit.20170401001030 Size: 189 File name: ca_audit Size: 6705 ---------------------------- Number of entries returned 3 ----------------------------",
"pki -U https://server.example.com:8443 -n auditor ca-audit-file-retrieve ca_audit",
"grep \"\\[AuditEvent=ACCESS_SESSION_ESTABLISH\\]\" log_file",
"pki ca-cert-find --name \"CA Audit Signing Certificate\" --------------- 1 entries found --------------- Serial Number: 0x5 Subject DN: CN=CA Audit Signing Certificate,O=EXAMPLE Status: VALID Type: X.509 version 3 Key Algorithm: PKCS #1 RSA with 2048-bit key Not Valid Before: Fri Jul 08 03:56:08 CEST 2016 Not Valid After: Thu Jun 28 03:56:08 CEST 2018 Issued On: Fri Jul 08 03:56:08 CEST 2016 Issued By: system ---------------------------- Number of entries returned 1 ----------------------------",
"pki client-cert-import \"CA Audit Signing Certificate\" --serial 0x5 --trust \",,P\" --------------------------------------------------- Imported certificate \"CA Audit Signing Certificate\" ---------------------------------------------------",
"cat > ~/audit.txt << EOF ca_audit.20170331225716 ca_audit.20170401001030 ca_audit EOF",
"AuditVerify -d ~/.dogtag/nssdb/ -n \"CA Audit Signing Certificate\" -a ~/audit.txt Verification process complete. Valid signatures: 10 Invalid signatures: 0",
"ausearch -k rhcs_audit_deletion",
"ausearch -k rhcs_audit_nssdb",
"ausearch -k rhcs_audit_time_change",
"ausearch -m SOFTWARE_UPDATE",
"ausearch -k rhcs_audit_config"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/using_logs |
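The commands listed above can be combined into a single pass for the CA subsystem. The following shell sketch is illustrative only, not an official procedure: the host name, port, auditor nickname, and log file name are placeholder values taken from the examples above and must be adjusted to your deployment, and it assumes that the auditor's client certificate and the CA audit signing certificate are already imported into ~/.dogtag/nssdb/ as described in Section 16.3.2.3.
# List the audit log files available on the CA.
pki -h server.example.com -p 8443 -n auditor ca-audit-file-find
# Download the current audit log file.
pki -U https://server.example.com:8443 -n auditor ca-audit-file-retrieve ca_audit
# Record the downloaded files, oldest first, then verify their signatures.
echo "ca_audit" > ~/audit.txt
AuditVerify -d ~/.dogtag/nssdb/ -n "CA Audit Signing Certificate" -a ~/audit.txt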
Chapter 3. Installer-provisioned infrastructure | Chapter 3. Installer-provisioned infrastructure 3.1. Preparing to install a cluster on AWS You prepare to install an OpenShift Container Platform cluster on AWS by completing the following steps: Verifying internet connectivity for your cluster. Configuring an AWS account . Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, manually creating long-term credentials for AWS or configuring an AWS cluster to use short-term credentials with Amazon Web Services Security Token Service (AWS STS). 3.1.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.1.2. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.1.3. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.1.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.1.5. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager .
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.2. Installing a cluster on AWS In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options. 3.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2.2. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. 
If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 3.2.3. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.2.4. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.2.5. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.3. Installing a cluster on AWS with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes. 3.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.3.2. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. 
Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.3.3. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". 
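As an illustration of that note, the following sketch sets the compute replica count to 0 in a generated configuration file. The yq utility is an assumption here (it is not part of the OpenShift tooling), and you can make the same change in any text editor; <installation_directory> is the directory you passed to the installation program.
# Three-node clusters run workloads on the schedulable control plane, so no dedicated compute machines are requested.
yq -i '.compute[0].replicas = 0' <installation_directory>/install-config.yaml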
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.3.3.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.3.3.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.3.3.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.2. 
Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.3.3.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 19 pullSecret: '{"auths": ...}' 20 1 12 14 20 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 
Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 17 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.3.3.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.3.4. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.3.4.1. 
Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.3.4.2. 
Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.3.4.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.3. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.4. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. 
rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.3.4.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.3.4.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 
2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.3.4.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. 
This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. 
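If your organization wants to review the generated role definitions before anything is created in AWS, the --dry-run behavior described earlier in this section can be combined with the AWS CLI. The sketch below is an outline under those assumptions rather than a verified procedure; the generated JSON file name is hypothetical, because the exact files that ccoctl writes depend on your CredentialsRequest objects.
# Write the IAM role definitions to local JSON files instead of calling the AWS API.
ccoctl aws create-iam-roles \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com \
  --dry-run
# After reviewing (and optionally editing) a generated file, apply it with the AWS CLI.
aws iam create-role --cli-input-json file://<path_to_generated_role_file>.json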
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.3.4.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. 
Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.3.6. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.3.7. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. 
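If you are scripting this verification, you can capture the password in a shell variable at this point and reuse it after you retrieve the console route in the next step; this is a convenience sketch, not a required part of the procedure.
KUBEADMIN_PASSWORD="$(cat <installation_directory>/auth/kubeadmin-password)"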
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
3.3.8. Next steps
Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
3.4. Installing a cluster on AWS with network customizations
In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
3.4.1. Prerequisites
You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to.
3.4.2. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster.
Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2.
3.4.3.
Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.4.3.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.4.3.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.5. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.4.3.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.6. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.4.3.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      metadataService:
        authentication: Optional 7
      type: m6i.xlarge
  replicas: 3
compute: 8
- hyperthreading: Enabled 9
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 10
      metadataService:
        authentication: Optional 11
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 12
networking: 13
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 14
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 15
    propagateUserTags: true 16
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-0c5d3e03c0ab9b19a 17
    serviceEndpoints: 18
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
fips: false 19
sshKey: ssh-ed25519 AAAA... 20
pullSecret: '{"auths": ...}' 21
1 12 15 21 Required. The installation program prompts you for this value.
2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide.
3 8 13 16 If you do not provide these parameters and values, the installation program provides the default value.
4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used.
5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading.
6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 .
7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets.
14 The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region.
The endpoint URL must use the https protocol and the host must trust the certificate.
19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
3.4.3.5. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ).
Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http .
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.4.4. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.4.4.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
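If you prefer to script the credentialsMode change from the first step of this procedure rather than editing install-config.yaml by hand, a YAML processor can set the field. The following sketch assumes that the mikefarah yq v4 tool is installed, which is not part of the documented procedure.
yq -i '.credentialsMode = "Manual"' <installation_directory>/install-config.yaml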
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
USD oc adm release extract \
  --from=USDRELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - iam:GetUser
      - iam:GetUserPolicy
      - iam:ListAccessKeys
      resource: "*"
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:DeleteBucket
      resource: "*"
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  aws_access_key_id: <base64_encoded_aws_access_key_id>
  aws_secret_access_key: <base64_encoded_aws_secret_access_key>
Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
3.4.4.2. Configuring an AWS cluster to use short-term credentials
To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster.
3.4.4.2.1. Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.7.
Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.8. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.4.4.2.2. 
Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.4.4.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. 
Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.4.4.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. 
Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. 
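For example, the following AWS CLI query lists the roles whose names contain the tracking name that you supplied to ccoctl ; the JMESPath filter is illustrative and assumes that the generated role names include that value.
aws iam list-roles \
  --query "Roles[?contains(RoleName, '<name>')].[RoleName, Arn]" \
  --output table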
For more information, refer to AWS documentation on listing IAM roles. 3.4.4.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.4.5. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.4.5.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.3. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. 
If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes.
defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table:
Table 3.4. defaultNetwork object Field Type Description
type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters.
ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin.
Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin:
Table 3.5. ovnKubernetesConfig object Field Type Description
mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 .
genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation.
ipsecConfig object Specify a configuration object for customizing the IPsec configuration.
ipv4 object Specifies a configuration object for IPv4 settings.
ipv6 object Specifies a configuration object for IPv6 settings.
policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.
gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.
Table 3.6. ovnKubernetesConfig.ipv4 object Field Type Description
internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 .
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.7. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.8. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.9. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.10. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 3.11. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 3.12. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 3.13. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.4.6. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. Note For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer . 3.4.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 3.4.8. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. 
Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 3.4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
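If the installation program times out or your session is interrupted after the deployment starts, you can complete the deployment by using the wait-for command of the installer instead of starting over. A minimal sketch, assuming the same installation directory:

./openshift-install wait-for install-complete --dir <installation_directory> --log-level=info

When the cluster is ready, this should print the same completion details, including the web console URL and the kubeadmin credentials.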
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.4.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. 
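A hedged sketch of that alternative: extracting the password line from the installation log with grep. This assumes that the log contains a line of the form password: "<value>", which can vary between releases:

grep -o 'password: "[^"]*"' <installation_directory>/.openshift_install.log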
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.4.12. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.5. Installing a cluster on AWS in a restricted network In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC). 3.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 3.5.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs.
Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 3.5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.5.3. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking yourself for the subnets that you install your cluster into. 3.5.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag.
If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. 
Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.5.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.5.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.5.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.5.4. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation.
You have obtained the contents of the certificate for your mirror registry. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the subnets for the VPC to install the cluster in: subnets: - subnet-1 - subnet-2 - subnet-3 Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. 
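As an aside to the pullSecret step above, the <credentials> value is the base64 encoding of the user name and password joined by a colon. A minimal sketch, assuming a hypothetical mirror registry user myuser with password mypass:

echo -n 'myuser:mypass' | base64

The -n flag matters: encoding a trailing newline produces a credentials string that the mirror registry rejects.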
For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.5.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.14. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.5.4.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 12 14 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 
Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 23 Provide the contents of the certificate file that you used for your mirror registry. 24 Provide the imageContentSources section from the output of the command to mirror the repository. 3.5.4.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.5.5. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. 
If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.5.5.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.5.5.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.5.5.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.9. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.10. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
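A hedged way to check for the mismatch that the preceding note describes, assuming that oc adm release info prints an OS/Arch field in its summary output and that uname -m reports the architecture of the local host:

# Compare the local architecture with the architecture of the release image
uname -m
oc adm release info $RELEASE_IMAGE -a ~/.pull-secret | grep -i 'arch'

If the two differ, perform the remaining steps on a host whose architecture matches the release image.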
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.5.5.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.5.5.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. 
If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.5.5.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. 
Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
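Before processing the extracted objects in the next step, you can optionally review what the extraction produced. A minimal sketch; the exact file names vary by release and cluster configuration, and each extracted manifest is expected to declare an AWSProviderSpec like the earlier samples:

ls <path_to_directory_for_credentials_requests>
grep -l 'AWSProviderSpec' <path_to_directory_for_credentials_requests>/*.yaml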
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.5.5.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.5.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.5.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.5.8. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.5.9. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . 3.6. Installing a cluster on AWS into an existing VPC In OpenShift Container Platform version 4.16, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 3.6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If the existing VPC is owned by a different account than the cluster, you shared the VPC between accounts. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.6.2. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf.
You must configure networking for the subnets that you install your cluster to yourself. 3.6.2.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: Create a public and private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet. For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation. Record each subnet ID. Completing the installation requires that you enter these values in the platform section of the install-config.yaml file. See Finding a subnet ID in the AWS documentation. The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify. The VPC must have a public internet gateway attached to it. For each availability zone: The public subnet requires a route to the internet gateway. The public subnet requires a NAT gateway with an EIP address. The private subnet requires a route to the NAT gateway in public subnet. The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. 
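For example, a minimal, hypothetical snippet of these fields might look like the following, where the hosted zone ID and the role ARN are placeholder values that you replace with your own, and where hostedZoneRole is needed only when the hosted zone belongs to another account:
platform:
  aws:
    hostedZone: Z3URY6TWQ91KVV
    hostedZoneRole: arn:aws:iam::123456789012:role/shared-vpc-hosted-zone-role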
You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.6.2.2. 
VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.6.2.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.6.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.6.2.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.6.2.6. Modifying trust policy when installing into a shared VPC If you install your cluster using a shared VPC, you can use the Passthrough or Manual credentials mode. You must add the IAM role used to install the cluster as a principal in the trust policy of the account that owns the VPC.
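The following is a minimal, hypothetical sketch of what the trust policy on that role can look like; the account ID and principal name are placeholders, and the exact principals that you list depend on the credentials mode, as described below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/clustercreator"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}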
If you use Passthrough mode, add the Amazon Resource Name (ARN) of the account that creates the cluster, such as arn:aws:iam::123456789012:user/clustercreator , to the trust policy as a principal. If you use Manual mode, add the ARN of the account that creates the cluster as well as the ARN of the ingress operator role in the cluster owner account, such as arn:aws:iam::123456789012:role/<cluster-name>-openshift-ingress-operator-cloud-credentials , to the trust policy as principals. You must add the following actions to the policy: Example 3.11. Required actions for shared VPC installation route53:ChangeResourceRecordSets route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ChangeTagsForResource route53:GetAccountLimit route53:GetChange route53:GetHostedZone route53:ListTagsForResource route53:UpdateHostedZoneComment tag:GetResources tag:UntagResources 3.6.3. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.6.3.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.15. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.6.3.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.12. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.6.3.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.13. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.6.3.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths": ...}' 22 1 12 14 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 
18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.6.3.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.6.3.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... 
compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.6.4. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.6.4.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. 
The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.6.4.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.6.4.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.14. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.15. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. 
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.6.4.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.6.4.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.6.4.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.6.4.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.6.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.6.6. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin
3.6.7. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
3.6.8. Next steps
Validating an installation. Customize your cluster. If necessary, you can opt out of remote health reporting. If necessary, you can remove cloud provider credentials. After installing a cluster on AWS into an existing VPC, you can extend the AWS VPC cluster into an AWS Outpost.
3.7. Installing a private cluster on AWS
In OpenShift Container Platform version 4.16, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
3.7.1. Prerequisites
You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to.
3.7.2. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
3.7.2.1. Private clusters in AWS
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to the internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
3.7.2.1.1. Limitations
The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use these subnets to create public load balancers.
3.7.3. About using a custom VPC
In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets in which you install your cluster yourself.
3.7.3.1. Requirements for using your VPC
The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned, Name, and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone by using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode.
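Note You can optionally confirm the enableDnsSupport and enableDnsHostnames attributes before you run the installation program. The following AWS CLI sketch is not part of the documented procedure, and the VPC ID is a placeholder:
USD aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsSupport
USD aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsHostnames
Both queries should report a Value of true. If either attribute is disabled, you can enable it with the aws ec2 modify-vpc-attribute command before you install the cluster.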
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available:
Option 1: Create VPC endpoints
Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services.
Option 2: Create a proxy without VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services.
Option 3: Create a proxy with VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services.
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.
Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.
Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.
Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
80: Inbound HTTP traffic
443: Inbound HTTPS traffic
22: Inbound SSH traffic
1024 - 65535: Inbound ephemeral traffic
0 - 65535: Outbound ephemeral traffic
Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.
3.7.3.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
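Note You can approximate these validation checks yourself with an optional AWS CLI query before you run the installation program. This sketch is not part of the documented procedure; the subnet IDs are placeholders:
USD aws ec2 describe-subnets --subnet-ids <subnet_1> <subnet_2> <subnet_3> \
    --query 'Subnets[].{id:SubnetId,az:AvailabilityZone,cidr:CidrBlock}'
Review the output to confirm that each subnet CIDR belongs to your machine CIDR and that the availability zones are covered as you expect.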
3.7.3.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
3.7.3.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
3.7.3.5. Optional: AWS security groups
By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster".
3.7.4. Manually creating the installation configuration file
Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>. Note You must name this configuration file install-config.yaml. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
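Note One simple way to keep a reusable copy is to store it outside the installation directory, because the installation program consumes the file in that directory. The paths in this sketch are examples only:
USD mkdir -p ~/ocp-backups
USD cp <installation_directory>/install-config.yaml ~/ocp-backups/install-config.yaml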
Additional resources Installation configuration parameters for AWS
3.7.4.1. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Table 3.16. Minimum resource requirements
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2]
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300
[1] One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, an instance with two threads per core, two cores, and one socket provides (2 x 2) x 1 = 4 vCPUs.
[2] OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
[3] As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
x86-64 architecture requires x86-64-v2 ISA
ARM64 architecture requires ARMv8.0-A ISA
IBM Power architecture requires Power 9 ISA
s390x architecture requires z14 ISA
For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage
3.7.4.2. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.16. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.*
3.7.4.3. Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.17. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.*
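Note To confirm that a tested instance type is offered in the availability zones that you plan to use, you can optionally query AWS before you edit the install-config.yaml file. This check is not part of the documented procedure; the instance type and region in this sketch are examples:
USD aws ec2 describe-instance-type-offerings \
    --location-type availability-zone \
    --filters Name=instance-type,Values=m6i.xlarge \
    --region us-west-2
Availability zones that are missing from the output do not offer that instance type.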
3.7.4.4. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      metadataService:
        authentication: Optional 7
      type: m6i.xlarge
  replicas: 3
compute: 8
- hyperthreading: Enabled 9
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 10
      metadataService:
        authentication: Optional 11
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 12
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 13
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 14
    propagateUserTags: true 15
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 16
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-0c5d3e03c0ab9b19a 17
    serviceEndpoints: 18
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV 19
fips: false 20
sshKey: ssh-ed25519 AAAA... 21
publish: Internal 22
pullSecret: '{"auths": ...}' 23
1 12 14 23 Required. The installation program prompts you for this value.
2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide.
3 8 15 If you do not provide these parameters and values, the installation program provides the default value.
4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used.
5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 3.7.4.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec. Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
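Note After the installation completes, you can inspect the resulting cluster-wide Proxy object to confirm the settings that were applied, for example:
USD oc get proxy/cluster -o yaml
The spec stanza should reflect the httpProxy, httpsProxy, and noProxy values from your install-config.yaml file.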
3.7.4.6. Applying existing AWS security groups to the cluster
Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups. The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups
# ...
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      additionalSecurityGroupIDs:
        - sg-1 1
        - sg-2
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      additionalSecurityGroupIDs:
        - sg-3
        - sg-4
  replicas: 3
platform:
  aws:
    region: us-east-1
    subnets: 2
    - subnet-1
    - subnet-2
    - subnet-3
1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix.
2 Specify subnets for each availability zone that your cluster uses.
3.7.5. Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials. To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials.
3.7.5.1. Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown: Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
USD oc adm release extract \
  --from=USDRELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.7.5.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.7.5.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.18. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.19. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.7.5.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.7.5.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.7.5.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. 
This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. 
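Note If you prefer to review the AWS API calls before any resources are created, you can combine the same command with the --dry-run flag that is described at the start of this procedure, and then apply the generated JSON with the AWS CLI. The following sketch is illustrative only; the JSON file name is hypothetical, so use the file that ccoctl writes to your local file system:
USD ccoctl aws create-identity-provider \
    --name=<name> \
    --region=<aws_region> \
    --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public \
    --dry-run
USD aws iam create-open-id-connect-provider \
    --cli-input-json file://<generated_iam_identity_provider>.json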
Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.7.5.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.7.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
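Note If you encounter the expired-certificate situation described in the preceding note, you can list the pending CSRs and approve them with the oc CLI, for example:
USD oc get csr
USD oc adm certificate approve <csr_name>
where <csr_name> is the name of a CSR that the oc get csr output reports in the Pending state.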
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.7.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.7.8. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.7.9. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.8. Installing a cluster on AWS into a government region In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster. 3.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device.
The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.8.2. AWS government regions OpenShift Container Platform supports deploying a cluster to an AWS GovCloud (US) region. The following AWS GovCloud regions are supported: us-gov-east-1 us-gov-west-1 3.8.3. Installation requirements Before you can install the cluster, you must: Provide an existing private AWS VPC and subnets to host the cluster. Public zones are not supported in Route 53 in AWS GovCloud. As a result, clusters must be private when you deploy to an AWS government region. Manually create the installation configuration file ( install-config.yaml ). 3.8.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS GovCloud Region. Therefore, clusters must be private if they are deployed to an AWS GovCloud Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 3.8.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to the internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
3.8.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 3.8.5. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure the networking for those subnets yourself. 3.8.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records.
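For example, if your existing VPC was created with either attribute disabled, you can enable both attributes with the AWS CLI before you install the cluster. This is only an illustrative sketch; the VPC ID is a placeholder, and each attribute must be set in a separate call:

USD aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support '{"Value":true}'

USD aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames '{"Value":true}'

You can check the current value of either attribute with the aws ec2 describe-vpc-attribute command.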
See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. 
The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.8.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.8.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.8.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.8.5.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster".
3.8.6. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.8.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 3.8.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.17. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs.
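For example, a hypothetical instance type with 2 threads per core, 8 cores, and 1 socket exposes (2 x 8) x 1 = 16 vCPUs but provides only 8 physical cores; with SMT disabled, the same instance exposes 8 vCPUs.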
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 3.8.7.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.20. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.8.7.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.21. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.8.7.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually.
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{"auths": ...}' 23 1 12 14 23 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 
18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 3.8.7.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.8.7.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. 
Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.8.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Incorporating the Cloud Credential Operator utility manifests . 3.8.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.8.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.8.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.22. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.23. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.8.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.8.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
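For example, you can list the roles with the AWS CLI. The following sketch assumes the default ccoctl naming, in which the value that you supplied to the --name parameter appears in each role name; adjust the filter if you used different naming:

USD aws iam list-roles --query 'Roles[?contains(RoleName, `<name>`)].RoleName' --output text

The role names in the output should correspond to the CredentialsRequest objects that the ccoctl utility processed.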
3.8.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameter. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws_region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens.
Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.8.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.8.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.8.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.8.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.8.12. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.9. Installing a cluster on AWS into a Secret or Top Secret Region In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) into the following secret regions: Secret Commercial Cloud Services (SC2S) Commercial Cloud Services (C2S) To configure a cluster in either region, you change parameters in the install-config.yaml file before you install the cluster. Warning In OpenShift Container Platform 4.16, the installation program uses Cluster API instead of Terraform to provision cluster infrastructure during installations on AWS.
Installing a cluster on AWS into a secret or top-secret region by using the Cluster API implementation has not been tested as of the release of OpenShift Container Platform 4.16. This document will be updated when installation into a secret region has been tested. There is a known issue with Network Load Balancers' support for security groups in secret or top secret regions that causes installations in these regions to fail. For more information, see OCPBUGS-33311 . 3.9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multifactor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.9.2. AWS secret regions The following AWS secret partitions are supported: us-isob-east-1 (SC2S) us-iso-east-1 (C2S) Note The maximum supported MTU in the AWS SC2S and C2S Regions is not the same as in AWS commercial regions. For more information about configuring MTU during installation, see the Cluster Network Operator configuration object section in Installing a cluster on AWS with network customizations . 3.9.3. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image for the AWS Secret and Top Secret Regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. Important You must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file. 3.9.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS Top Secret Region. Therefore, clusters must be private if they are deployed to an AWS Top Secret Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster.
This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 3.9.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 3.9.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 3.9.5. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 3.9.5.1. 
Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. A cluster in an SC2S or C2S Region is unable to reach the public IP addresses for the EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. 
With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.9.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.9.5.3. 
Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.9.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.9.5.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.9.6. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 The RHCOS VMDK version, like 4.16.0 .
Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 3.9.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 3.9.7.1. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.24. Machine types based on 64-bit x86 architecture for secret regions c4.* c5.* i3.* m4.* m5.* r4.* r5.* t3.* 3.9.7.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 25 The custom CA certificate. This is required when deploying to the SC2S or C2S Regions because the AWS API requires a custom CA trust bundle. 3.9.7.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.9.7.4. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.9.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.9.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... 
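If you script this part of the setup, the same change can be applied in place with a YAML-aware tool. This is a sketch only and assumes the yq (v4) utility is installed on the host; yq is not part of the documented procedure:
USD yq -i '.credentialsMode = "Manual"' install-config.yaml
Editing the file manually, as shown in the snippet above, works just as well; the only requirement is that credentialsMode is set to Manual before you generate the manifests.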
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.9.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.9.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.25. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.26. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.9.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.9.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. 
By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.9.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . 
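Before you create the identity provider, you can optionally confirm that the key pair and the bound service account signing key were written to the expected locations. This is a quick check that uses the paths reported in the command output above:
USD ls /<path_to_ccoctl_output_dir>/serviceaccount-signer.private /<path_to_ccoctl_output_dir>/serviceaccount-signer.public
USD ls /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key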
Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. 
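As a quick spot check before you move on to the verification steps, you can list the roles that ccoctl created by querying AWS for role names that begin with the <name> value you supplied. This is a sketch only; the exact role names depend on your cluster configuration, and in SC2S or C2S Regions the AWS CLI must already be configured for the correct partition and endpoints:
USD aws iam list-roles --query "Roles[?starts_with(RoleName, '<name>')].RoleName" --output table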
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.9.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.9.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. 
Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.9.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.9.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. 
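If you use the log file, a pattern match against the login line can extract the password directly. This is a sketch only, because the exact log line format can vary between releases:
USD grep -o 'password: "[^"]*"' <installation_directory>/.openshift_install.log | tail -1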
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.9.12. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.10. Installing a cluster on AWS China In OpenShift Container Platform version 4.16, you can install a cluster to the following Amazon Web Services (AWS) China regions: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 3.10.1. Prerequisites You have an Internet Content Provider (ICP) license. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. 3.10.2. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS China regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. 3.10.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. 
The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network. Note AWS China does not support a VPN connection between the VPC and your network. For more information about the Amazon VPC service in the Beijing and Ningxia regions, see Amazon Virtual Private Cloud in the AWS China documentation. 3.10.3.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 3.10.3.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 3.10.4. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 3.10.4.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. 
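A useful preflight check is to confirm that DNS resolution and DNS hostnames are enabled on the VPC, because several of the requirements that follow depend on them. The following commands are an illustrative sketch that assumes the AWS CLI and a known VPC ID; they are not part of the documented procedure. If the second command reports false, the third command enables the attribute:
USD aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsSupport
USD aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsHostnames
USD aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames "{\"Value\":true}"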
See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. 
With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.10.4.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.10.4.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.10.4.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.10.4.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.10.5. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 1 The AWS profile name that holds your AWS credentials, like beijingadmin . Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 1 The AWS region, like cn-north-1 . Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 The RHCOS VMDK version, like 4.16.0 . 
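Before continuing, you might want to confirm that the exported profile resolves to the intended account and partition, because a mismatch here only surfaces later when the import fails. This check is not part of the documented procedure; it is a sketch that uses a standard AWS CLI call:
USD aws sts get-caller-identity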
Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 3.10.6. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
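The install-config.yaml file that you create for an AWS China region must reference the custom AMI that you registered earlier. If you did not record the AMI ID when you ran register-image , the following lookup is one way to recover it. This is a sketch that assumes the environment variables from the upload procedure are still set:
USD aws ec2 describe-images \ --region USD{AWS_DEFAULT_REGION} \ --owners self \ --filters "Name=name,Values=rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ --query 'Images[0].ImageId' \ --output text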
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 3.10.6.1. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . 
To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 3.10.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.18. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
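Because this flow does not use the interactive installer prompts, a malformed installation configuration file is typically not detected until you create manifests or deploy the cluster. A quick local syntax check can catch obvious mistakes earlier. The following sketch assumes that python3 and the PyYAML module are available on the workstation; it is not part of the documented procedure:
USD python3 -c 'import yaml; yaml.safe_load(open("install-config.yaml")); print("install-config.yaml is valid YAML")'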
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.10.6.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.27. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.10.6.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.28. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.10.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.6.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. 
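Because a security group can be applied only if it belongs to the VPC that you are deploying the cluster into, it can be worth confirming the association before you edit the configuration. This is an illustrative sketch, and the group ID is a placeholder:
USD aws ec2 describe-security-groups \ --group-ids <security_group_id> \ --query 'SecurityGroups[].[GroupId,VpcId]' \ --output table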
Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.10.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.10.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.10.7.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.10.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.29. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.30. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.10.7.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.10.7.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.10.7.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. 
This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. 
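If you want to confirm that the identity provider was registered before you create the component roles, you can list the OpenID Connect providers in the account and check for the ARN shown in the command output above. This is an illustrative check that assumes the AWS CLI is configured with credentials for the same account:
USD aws iam list-open-id-connect-providers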
Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.10.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
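Before you copy anything, a quick listing can confirm that the ccoctl output directory contains both the manifests and the tls content that the following steps expect. The path is a placeholder, as elsewhere in this section:
USD ls /<path_to_ccoctl_output_dir>/manifests /<path_to_ccoctl_output_dir>/tls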
Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.10.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
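If you ever need to perform that manual approval, a typical sequence is to list the pending CSRs and approve each one by name. This is a general sketch rather than part of this procedure, and the CSR names in your cluster will differ:

$ oc get csr
$ oc adm certificate approve <csr_name>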
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.10.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.10.10. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.10.11. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.11. Installing a cluster with compute nodes on AWS Local Zones You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Local Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Local Zone subnets. AWS Local Zones is an infrastructure that places cloud resources close to metropolitan regions. For more information, see the AWS Local Zones Documentation . 3.11.1. Infrastructure prerequisites You reviewed details about OpenShift Container Platform installation and update processes. You are familiar with Selecting a cluster installation method and preparing it for users .
You configured an AWS account to host the cluster. Warning If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster must access. You noted the region and supported AWS Local Zones locations to create the network resources in. You read the AWS Local Zones features in the AWS documentation. You added permissions for creating network resources that support AWS Local Zones to the Identity and Access Management (IAM) user or role. The following example enables a zone group that can provide a user or role access for creating network resources that support AWS Local Zones. Example of an additional IAM policy with the ec2:ModifyAvailabilityZoneGroup permission attached to an IAM user or role. { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } 3.11.2. About AWS Local Zones and edge compute pool Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Local Zones environment. 3.11.2.1. Cluster limitations in AWS Local Zones Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Local Zone. Important The following list details limitations when deploying a cluster in a pre-configured AWS zone: The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300 . This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes. If you want the installation program to automatically create Local Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply with this method. Important The following configuration limitation applies when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster: When the installation program creates private subnets in AWS Local Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region.
If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Local Zones subnets in an OpenShift Container Platform cluster. 3.11.2.2. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Local Zones locations. When deploying a cluster that uses Local Zones, consider the following points: Amazon EC2 instances in the Local Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Local Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must always be less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How Local Zones work in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Local Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Local Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Local Zone on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones nodes. The new labels are: node-role.kubernetes.io/edge='' machine.openshift.io/zone-type=local-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Local Zones instances. Users can only run user workloads if they define tolerations in the pod specification. Additional resources MTU value selection Changing the MTU for the cluster network Understanding taints and tolerations Storage classes Ingress Controller sharding 3.11.3. Installation prerequisites Before you install a cluster in an AWS Local Zones environment, you must configure your infrastructure so that it can adopt Local Zone capabilities. 3.11.3.1. Opting in to AWS Local Zones If you plan to create subnets in AWS Local Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group.
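For example, assuming you saved a policy like the one shown in "Infrastructure prerequisites" to a local JSON file, you might attach it to an IAM user as an inline policy with a command such as the following; the user name, policy name, and file name are illustrative:

$ aws iam put-user-policy \
    --user-name <user_name> \
    --policy-name zone-group-opt-in \
    --policy-document file://zone-group-opt-in.json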
Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Local Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Local Zone. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Local Zones group. If the status is not-opted-in , you must opt in the GroupName as described in the next step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Local Zones where you want to create subnets. For example, specify us-east-1-nyc-1 to use the zone us-east-1-nyc-1a (US East New York). 3.11.3.2. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.11.4. Preparing for the installation Before you extend nodes to Local Zones, you must prepare certain resources for the cluster installation environment. 3.11.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.19. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, a machine with two threads per core, eight cores, and one socket provides (2 x 8) x 1 = 16 vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration.
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 3.11.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Local Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.31. Machine types based on 64-bit x86 architecture for AWS Local Zones c5.* c5d.* m6i.* m5.* r5.* t3.* Additional resources See AWS Local Zones features in the AWS documentation. 3.11.4.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.11.4.4. Examples of installation configuration files with edge compute pools The following examples show install-config.yaml files that contain an edge machine pool configuration. Configuration that uses an edge pool with a custom instance type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Instance types differ between locations. To verify availability in the Local Zones in which the cluster runs, see the AWS documentation. Configuration that uses an edge pool with a custom Amazon Elastic Block Store (EBS) type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Elastic Block Storage (EBS) types differ between locations. Check the AWS documentation to verify availability in the Local Zones in which the cluster runs. Configuration that uses an edge pool with custom security groups apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. 3.11.4.5. Customizing the cluster network MTU Before you deploy a cluster on AWS, you can customize the cluster network maximum transmission unit (MTU) for your cluster network to meet the needs of your infrastructure. By default, when you install a cluster with supported Local Zones capabilities, the MTU value for the cluster network is automatically adjusted to the lowest value that the network plugin accepts. Important Setting an unsupported MTU value for EC2 instances that operate in the Local Zones infrastructure can cause issues for your OpenShift Container Platform cluster. 
If the Local Zone supports higher MTU values between EC2 instances in the Local Zone and the AWS Region, you can manually configure the higher value to increase the network performance of the cluster network. You can customize the MTU for a cluster by specifying the networking.clusterNetworkMTU parameter in the install-config.yaml configuration file. Important All subnets in Local Zones must support the higher MTU value, so that each node in that zone can successfully communicate with services in the AWS Region and deploy your workloads. Example of overwriting the default MTU value apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Additional resources For more information about the maximum supported maximum transmission unit (MTU) value, see AWS resources supported in Local Zones in the AWS documentation. 3.11.5. Cluster installation options for an AWS Local Zones environment Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Local Zones: Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster. Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Local Zones subnets to the install-config.yaml file. Next steps Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Local Zones environment: Installing a cluster quickly in AWS Local Zones Installing a cluster in an existing VPC with defined AWS Local Zone subnets 3.11.6. Install a cluster quickly in AWS Local Zones For OpenShift Container Platform 4.16, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Local Zones locations. By using this installation route, the installation program automatically creates network resources and Local Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml file before you deploy the cluster. 3.11.6.1. Modifying an installation configuration file to use AWS Local Zones Modify an install-config.yaml file to include AWS Local Zones. Prerequisites You have configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You are familiar with the configuration limitations that apply when you configure the installation program to automatically create subnets for your OpenShift Container Platform cluster. You opted in to the Local Zones group for each zone. You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml file by specifying Local Zones names in the platform.aws.zones property of the edge compute pool. # ... platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #... 1 The AWS Region name. 2 The list of Local Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field.
Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Local Zones in Los Angeles and Las Vegas locations apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' #... Deploy your cluster. Additional resources Creating the installation configuration file Cluster limitations in AWS Local Zones Next steps Deploying the cluster 3.11.7. Installing a cluster in an existing VPC that has Local Zone subnets You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the cloud infrastructure by using AWS Local Zones. Local Zone subnets extend regular compute nodes to edge networks. Each edge compute node runs a user workload. After you create an Amazon Web Services (AWS) Local Zone environment, and you deploy your cluster, you can use edge compute nodes to create user workloads in Local Zone subnets. Note If you want to create private subnets, you must either modify the provided CloudFormation template or create your own template. You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that it contains to create AWS resources according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. 3.11.7.1. Creating a VPC in AWS You can create a Virtual Private Cloud (VPC), and subnets for all Local Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Local Zones subnets not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You opted in to AWS Local Zones on your AWS account.
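As an optional check before you create the VPC, you can confirm that a target zone's group is opted in; the Region and zone name below are illustrative:

$ aws --region us-west-2 ec2 describe-availability-zones \
    --all-availability-zones \
    --filters Name=zone-name,Values=us-west-2-lax-1a \
    --query 'AvailabilityZones[].[ZoneName,OptInStatus]' \
    --output text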
Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path and the name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster. VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table. 3.11.7.2. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 3.32. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones.
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 3.11.7.3. Creating subnets in Local Zones Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Local Zones. Complete the following procedure for each Local Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Local Zones group. 
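The create-stack command in the following procedure references several shell variables. One way to populate the VPC-related values, assuming your VPC stack is named cluster-vpc (an illustrative name), is to read them from the stack outputs shown in the template above; the remaining variables, such as the subnet CIDR blocks, are values that you choose:

$ export CLUSTER_REGION="us-west-2"
$ export VPC_ID=$(aws cloudformation describe-stacks \
    --stack-name cluster-vpc --region "${CLUSTER_REGION}" \
    --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text)
$ export ROUTE_TABLE_PUB=$(aws cloudformation describe-stacks \
    --stack-name cluster-vpc --region "${CLUSTER_REGION}" \
    --query 'Stacks[0].Outputs[?OutputKey==`PublicRouteTableId`].OutputValue' --output text)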
Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the subnets that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the subnets: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname> . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcId in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the value of the Local Zones name to create the subnets. 6 USD{ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create your cluster. PublicSubnetId The ID of the public subnet created by the CloudFormation stack. PrivateSubnetId The ID of the private subnet created by the CloudFormation stack. 3.11.7.4. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones infrastructure. Example 3.33. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a.
Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 3.11.7.5. Modifying an installation configuration file to use AWS Local Zones subnets Modify your install-config.yaml file to include Local Zones subnets. Prerequisites You created subnets by using the procedure "Creating subnets in Local Zones". You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml configuration file by specifying Local Zones subnets in the platform.aws.subnets parameter. Example installation configuration file with Local Zones subnets # ... platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1 # ... 1 List of subnet IDs created in the zones: Availability and Local Zones. Additional resources For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console . For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation. steps Deploying the cluster 3.11.8. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. 
The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Edge compute pools and AWS Local Zones". 3.11.9. Optional: Assign public IP addresses to edge compute nodes If your workload requires deploying the edge compute nodes in public subnets on Local Zones infrastructure, you can configure the machine set manifests when installing a cluster. AWS Local Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone. The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure. Important By default, OpenShift Container Platform deploys compute nodes in private subnets. For best performance, consider placing compute nodes in public subnets that have public IP addresses attached. You must create additional security groups, but ensure that you open the groups' rules to internet traffic only when necessary. Procedure Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests get created at the openshift and manifests directory level. USD ./openshift-install create manifests --dir <installation_directory> Edit the machine set manifest that the installation program generates for the Local Zones, so that the manifest gets deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIp parameter. Example machine set manifest configuration for installing a cluster quickly in Local Zones spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME} Example machine set manifest configuration for installing a cluster in an existing VPC that has Local Zones subnets apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true 3.11.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.11.11. Verifying the status of the deployed cluster Verify that your OpenShift Container Platform successfully deployed on AWS Local Zones. 3.11.11.1. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.11.11.2. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.11.11.3. Verifying nodes that were created with edge compute pool After you install a cluster that uses AWS Local Zones infrastructure, check the status of the machines that the machine set manifests created during installation. To check the machine sets created from the subnet you added to the install-config.yaml file, run the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m To check the machines that were created from the machine sets, run the following command: USD oc get machines -n openshift-machine-api Example output To check nodes with edge roles, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f Next steps Validating an installation . If necessary, you can opt out of remote health reporting . 3.12. Installing a cluster with compute nodes on AWS Wavelength Zones You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Wavelength Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Wavelength Zone subnets. AWS Wavelength Zones is an infrastructure that AWS configured for mobile edge computing (MEC) applications. A Wavelength Zone embeds AWS compute and storage services within the 5G network of a communication service provider (CSP). By placing application servers in a Wavelength Zone, the application traffic from your 5G devices can stay in the 5G network. The application traffic of the device reaches the target server directly, making latency a non-issue. Additional resources See Wavelength Zones in the AWS documentation. 3.12.1. Infrastructure prerequisites You reviewed details about OpenShift Container Platform installation and update processes. You are familiar with Selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster.
Warning If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster must access. You noted the region and supported AWS Wavelength Zone locations to create the network resources in. You read AWS Wavelength features in the AWS documentation. You read the Quotas and considerations for Wavelength Zones in the AWS documentation. You added permissions for creating network resources that support AWS Wavelength Zones to the Identity and Access Management (IAM) user or role. For example: Example of an additional IAM policy that attached ec2:ModifyAvailabilityZoneGroup , ec2:CreateCarrierGateway , and ec2:DeleteCarrierGateway permissions to a user or role { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DeleteCarrierGateway", "ec2:CreateCarrierGateway" ], "Resource": "*" }, { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } 3.12.2. About AWS Wavelength Zones and edge compute pool Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Wavelength Zones environment. 3.12.2.1. Cluster limitations in AWS Wavelength Zones Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Wavelength Zone. Important The following list details limitations when deploying a cluster in a pre-configured AWS zone: The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300 . This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes. If you want the installation program to automatically create Wavelength Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply with this method. The following note details some of these limitations. For other limitations, ensure that you read the "Quotas and considerations for Wavelength Zones" document that Red Hat provides in the "Infrastructure prerequisites" section. 
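As the limitations above note, the default gp3-csi storage class is not available on zone locations, so any workload that runs on zone nodes must request the gp2-csi storage class explicitly. The following PersistentVolumeClaim is a minimal sketch of that setting; the claim name, namespace, and storage size are illustrative values and not part of the documented procedure:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edge-app-data        # illustrative name
  namespace: edge-app        # illustrative namespace
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2-csi  # explicitly select the gp2-backed CSI storage class
  resources:
    requests:
      storage: 10Gi          # illustrative size

A pod that mounts this claim on a zone node then receives a gp2-backed EBS volume, which is the volume type supported in those locations.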
Important The following configuration limitation applies when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster: When the installation program creates private subnets in AWS Wavelength Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region. If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Wavelength Zones subnets in an OpenShift Container Platform cluster. 3.12.2.2. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Wavelength Zones locations. When deploying a cluster that uses Wavelength Zones, consider the following points: Amazon EC2 instances in the Wavelength Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Wavelength Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Wavelength Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Wavelength Zones and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How AWS Wavelength work in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Wavelength Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Wavelength Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Wavelength Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Wavelength Zones on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Wavelength Zones nodes. The new labels are: node-role.kubernetes.io/edge='' machine.openshift.io/zone-type=wavelength-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Wavelength Zones instances. Users can only run user workloads if they define tolerations in the pod specification. Additional resources MTU value selection Changing the MTU for the cluster network Understanding taints and tolerations Storage classes Ingress Controller sharding 3.12.3. Installation prerequisites Before you install a cluster in an AWS Wavelength Zones environment, you must configure your infrastructure so that it can adopt Wavelength Zone capabilities. 3.12.3.1. 
Opting in to an AWS Wavelength Zones If you plan to create subnets in AWS Wavelength Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Wavelength Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=wavelength-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Wavelength Zones. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Wavelength Zones group. If the status is not-opted-in , you must opt in the GroupName as described in the step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Wavelength Zones where you want to create subnets. As an example for Wavelength Zones, specify us-east-1-wl1 to use the zone us-east-1-wl1-nyc-wlz-1 (US East New York). 3.12.3.2. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.12.4. Preparing for the installation Before you extend nodes to Wavelength Zones, you must prepare certain resources for the cluster installation environment. 3.12.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.20. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 3.12.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Wavelength Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.34. Machine types based on 64-bit x86 architecture for AWS Wavelength Zones r5.* t3.* Additional resources See AWS Wavelength features in the AWS documentation. 3.12.4.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. 
Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.12.4.4. Examples of installation configuration files with edge compute pools The following examples show install-config.yaml files that contain an edge machine pool configuration. Configuration that uses an edge pool with a custom instance type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Instance types differ between locations. To verify availability in the Wavelength Zones in which the cluster runs, see the AWS documentation. Configuration that uses an edge pool with custom security groups apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. 3.12.5. Cluster installation options for an AWS Wavelength Zones environment Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Wavelength Zones: Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster. Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Wavelength Zones subnets to the install-config.yaml file. 
steps Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Wavelength Zones environment: Installing a cluster quickly in AWS Wavelength Zones Modifying an installation configuration file to use AWS Wavelength Zones 3.12.6. Install a cluster quickly in AWS Wavelength Zones For OpenShift Container Platform 4.16, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Wavelength Zones locations. By using this installation route, the installation program automatically creates network resources and Wavelength Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml file before you deploy the cluster. 3.12.6.1. Modifying an installation configuration file to use AWS Wavelength Zones Modify an install-config.yaml file to include AWS Wavelength Zones. Prerequisites You have configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You are familiar with the configuration limitations that apply when you specify the installation program to automatically create subnets for your OpenShift Container Platform cluster. You opted in to the Wavelength Zones group for each zone. You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml file by specifying Wavelength Zones names in the platform.aws.zones property of the edge compute pool. # ... platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <wavelength_zone_name> #... 1 The AWS Region name. 2 The list of Wavelength Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field. Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Wavelength Zones in Los Angeles and Las Vegas locations apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-wl1-lax-wlz-1 - us-west-2-wl1-las-wlz-1 pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' #... Deploy your cluster. Additional resources Creating the installation configuration file Cluster limitations in AWS Wavelength Zones steps Deploying the cluster 3.12.7. Installing a cluster in an existing VPC that has Wavelength Zone subnets You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS Wavelength Zones. You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. 
You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. 3.12.7.1. Creating a VPC in AWS You can create a Virtual Private Cloud (VPC), and subnets for all Wavelength Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Wavelength Zones subnets not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You opted in to the AWS Wavelength Zones on your AWS account. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path and the name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster. VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table ID. 3.12.7.2. 
CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 3.35. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC 
PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 3.12.7.3. Creating a VPC carrier gateway To use public subnets in your OpenShift Container Platform cluster that runs on Wavelength Zones, you must create the carrier gateway and associate the carrier gateway to the VPC. Subnets are useful for deploying load balancers or edge compute nodes. To create edge nodes or internet-facing load balancers in Wavelength Zones locations for your OpenShift Container Platform cluster, you must create the following required network components: A carrier gateway that associates to the existing VPC. A carrier route table that lists route entries. A subnet that associates to the carrier route table. Carrier gateways exist for VPCs that only contain subnets in a Wavelength Zone. The following list explains the functions of a carrier gateway in the context of an AWS Wavelength Zones location: Provides connectivity between your Wavelength Zone and the carrier network, which includes any available devices from the carrier network. Performs Network Address Translation (NAT) functions, such as translating IP addresses that are public IP addresses stored in a network border group, from Wavelength Zones to carrier IP addresses. These translation functions apply to inbound and outbound traffic. Authorizes inbound traffic from a carrier network that is located in a specific location. Authorizes outbound traffic to a carrier network and the internet. Note No inbound connection configuration exists from the internet to a Wavelength Zone through the carrier gateway. You can use the provided CloudFormation template to create a stack of the following AWS resources: One carrier gateway that associates to the VPC ID in the template. One public route table for the Wavelength Zone named as <ClusterName>-public-carrier . Default IPv4 route entry in the new route table that targets the carrier gateway. VPC gateway endpoint for an AWS Simple Storage Service (S3). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . Procedure Go to the section of the documentation named "CloudFormation template for the VPC Carrier Gateway", and then copy the syntax from the CloudFormation template for VPC Carrier Gateway template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. 
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \// ParameterKey=VpcId,ParameterValue="USD{VpcId}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{ClusterName}" 4 1 <stack_name> is the name for the CloudFormation stack, such as clusterName-vpc-carrier-gw . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 <VpcId> is the VPC ID extracted from the CloudFormation stack output created in the section named "Creating a VPC in AWS". 4 <ClusterName> is a custom value that prefixes to resources that the CloudFormation stack creates. You can use the same name that is defined in the metadata.name section of the install-config.yaml configuration file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f Verification Confirm that the CloudFormation template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameter. Ensure that you provide the parameter value to the other CloudFormation templates that you run to create for your cluster. PublicRouteTableId The ID of the Route Table in the Carrier infrastructure. Additional resources See Amazon S3 in the AWS documentation. 3.12.7.4. CloudFormation template for the VPC Carrier Gateway You can use the following CloudFormation template to deploy the Carrier Gateway on AWS Wavelength infrastructure. Example 3.36. CloudFormation template for VPC Carrier Gateway AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: "AWS::EC2::CarrierGateway" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "cagw"]] PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public-carrier"]] PublicRoute: Type: "AWS::EC2::Route" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable 3.12.7.5. Creating subnets in Wavelength Zones Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Wavelength Zones. 
Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Wavelength Zones group. Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<wavelength_zone_shortname> . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcID in the output of the CloudFormation template for the VPC. 4 USD{ZONE_NAME} is the value of Wavelength Zones name to create the subnets. 5 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 6 USD{ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's carrier gateway CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create for your cluster. PublicSubnetId The IDs of the public subnet created by the CloudFormation stack. PrivateSubnetId The IDs of the private subnet created by the CloudFormation stack. 3.12.7.6. 
CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Wavelength Zones infrastructure. Example 3.37. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] 3.12.7.7. Modifying an installation configuration file to use AWS Wavelength Zones subnets Modify your install-config.yaml file to include Wavelength Zones subnets. Prerequisites You created subnets by using the procedure "Creating subnets in Wavelength Zones". You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml configuration file by specifying Wavelength Zones subnets in the platform.aws.subnets parameter. 
Example installation configuration file with Wavelength Zones subnets # ... platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicOrPrivateSubnetID-Wavelength-1 # ... 1 List of subnet IDs created in the zones: Availability and Wavelength Zones. Additional resources For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console . For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation. steps Deploying the cluster 3.12.8. Optional: Assign public IP addresses to edge compute nodes If your workload requires deploying the edge compute nodes in public subnets on Wavelength Zones infrastructure, you can configure the machine set manifests when installing a cluster. AWS Wavelength Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone. The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure. Important By default, OpenShift Container Platform deploy the compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have their Public IP addresses attached to the subnets. You must create additional security groups, but ensure that you only open the groups' rules over the internet when you really need to. Procedure Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests get created at the openshift and manifests directory level. USD ./openshift-install create manifests --dir <installation_directory> Edit the machine set manifest that the installation program generates for the Wavelength Zones, so that the manifest gets deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIP parameter. Example machine set manifest configuration for installing a cluster quickly in Wavelength Zones spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME} Example machine set manifest configuration for installing a cluster in an existing VPC that has Wavelength Zones subnets apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true 3.12.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
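Tip As an optional check that is not part of the documented procedure, you can confirm which AWS identity your current credentials resolve to before you start the deployment, because a wrong profile or an expired session token is a common source of the permission errors mentioned above:

aws sts get-caller-identity

The command prints the account ID, user ID, and ARN of the caller. Verify that the output matches the account and the IAM user or role that you configured for the installation before you run the create cluster command.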
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.12.10. Verifying the status of the deployed cluster Verify that your OpenShift Container Platform successfully deployed on AWS Wavelength Zones. 3.12.10.1. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.12.10.2. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.12.10.3. Verifying nodes that were created with edge compute pool After you install a cluster that uses AWS Wavelength Zones infrastructure, check the status of the machine that was created by the machine set manifests created during installation. To check the machine sets created from the subnet you added to the install-config.yaml file, run the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m To check the machines that were created from the machine sets, run the following command: USD oc get machines -n openshift-machine-api Example output To check nodes with edge roles, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f steps Validating an installation . If necessary, you can opt out of remote health . 3.13. Extending an AWS VPC cluster into an AWS Outpost In OpenShift Container Platform version 4.14, you could install a cluster on Amazon Web Services (AWS) with compute nodes running in AWS Outposts as a Technology Preview. As of OpenShift Container Platform version 4.15, this installation method is no longer supported. Instead, you can install a cluster on AWS into an existing VPC, and provision compute nodes on AWS Outposts as a postinstallation configuration task. After installing a cluster on Amazon Web Services (AWS) into an existing Amazon Virtual Private Cloud (VPC) , you can create a compute machine set that deploys compute machines in AWS Outposts. AWS Outposts is an AWS edge compute service that enables using many features of a cloud-based AWS deployment with the reduced latency of an on-premise environment. For more information, see the AWS Outposts documentation . 3.13.1. 
AWS Outposts on OpenShift Container Platform requirements and limitations You can manage the resources on your AWS Outpost similarly to those on a cloud-based AWS cluster if you configure your OpenShift Container Platform cluster to accommodate the following requirements and limitations: To extend an OpenShift Container Platform cluster on AWS into an Outpost, you must have installed the cluster into an existing Amazon Virtual Private Cloud (VPC). The infrastructure of an Outpost is tied to an availability zone in an AWS region and uses a dedicated subnet. Edge compute machines deployed into an Outpost must use the Outpost subnet and the availability zone that the Outpost is tied to. When the AWS Kubernetes cloud controller manager discovers an Outpost subnet, it attempts to create service load balancers in the Outpost subnet. AWS Outposts do not support running service load balancers. To prevent the cloud controller manager from creating unsupported services in the Outpost subnet, you must include the kubernetes.io/cluster/unmanaged tag in the Outpost subnet configuration. This requirement is a workaround in OpenShift Container Platform version 4.16. For more information, see OCPBUGS-30041 . OpenShift Container Platform clusters on AWS include the gp3-csi and gp2-csi storage classes. These classes correspond to Amazon Elastic Block Store (EBS) gp3 and gp2 volumes. OpenShift Container Platform clusters use the gp3-csi storage class by default, but AWS Outposts does not support EBS gp3 volumes. This implementation uses the node-role.kubernetes.io/outposts taint to prevent spreading regular cluster workloads to the Outpost nodes. To schedule user workloads in the Outpost, you must specify a corresponding toleration in the Deployment resource for your application. Reserving the AWS Outpost infrastructure for user workloads avoids additional configuration requirements, such as updating the default CSI to gp2-csi so that it is compatible. To create a volume in the Outpost, the CSI driver requires the Outpost Amazon Resource Name (ARN). The driver uses the topology keys stored on the CSINode objects to determine the Outpost ARN. To ensure that the driver uses the correct topology values, you must set the volume binding mode to WaitForConsumer and avoid setting allowed topologies on any new storage classes that you create. When you extend an AWS VPC cluster into an Outpost, you have two types of compute resources. The Outpost has edge compute nodes, while the VPC has cloud-based compute nodes. The cloud-based AWS Elastic Block volume cannot attach to Outpost edge compute nodes, and the Outpost volumes cannot attach to cloud-based compute nodes. As a result, you cannot use CSI snapshots to migrate applications that use persistent storage from cloud-based compute nodes to edge compute nodes or directly use the original persistent volume. To migrate persistent storage data for applications, you must perform a manual backup and restore operation. AWS Outposts does not support AWS Network Load Balancers or AWS Classic Load Balancers. You must use AWS Application Load Balancers to enable load balancing for edge compute resources in the AWS Outposts environment. To provision an Application Load Balancer, you must use an Ingress resource and install the AWS Load Balancer Operator. If your cluster contains both edge and cloud-based compute instances that share workloads, additional configuration is required. 
When you extend an AWS VPC cluster into an Outpost, you have two types of compute resources. The Outpost has edge compute nodes, while the VPC has cloud-based compute nodes. The cloud-based AWS Elastic Block volume cannot attach to Outpost edge compute nodes, and the Outpost volumes cannot attach to cloud-based compute nodes. As a result, you cannot use CSI snapshots to migrate applications that use persistent storage from cloud-based compute nodes to edge compute nodes or directly use the original persistent volume. To migrate persistent storage data for applications, you must perform a manual backup and restore operation.
AWS Outposts does not support AWS Network Load Balancers or AWS Classic Load Balancers. You must use AWS Application Load Balancers to enable load balancing for edge compute resources in the AWS Outposts environment. To provision an Application Load Balancer, you must use an Ingress resource and install the AWS Load Balancer Operator. If your cluster contains both edge and cloud-based compute instances that share workloads, additional configuration is required. For more information, see "Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost".

Additional resources
Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost

3.13.2. Obtaining information about your environment
To extend an AWS VPC cluster to your Outpost, you must provide information about your OpenShift Container Platform cluster and your Outpost environment. You use this information to complete network configuration tasks and configure a compute machine set that creates compute machines in your Outpost. You can use command-line tools to gather the required details.

3.13.2.1. Obtaining information from your OpenShift Container Platform cluster
You can use the OpenShift CLI (oc) to obtain information from your OpenShift Container Platform cluster.
Tip
You might find it convenient to store some or all of these values as environment variables by using the export command.

Prerequisites
You have installed an OpenShift Container Platform cluster into a custom VPC on AWS.
You have access to the cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).

Procedure
List the infrastructure ID for the cluster by running the following command. Retain this value.
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructures.config.openshift.io cluster
Obtain details about the compute machine sets that the installation program created by running the following commands:
List the compute machine sets on your cluster:
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
Example output
NAME                           DESIRED   CURRENT   READY   AVAILABLE   AGE
<compute_machine_set_name_1>   1         1         1       1           55m
<compute_machine_set_name_2>   1         1         1       1           55m
Display the Amazon Machine Image (AMI) ID for one of the listed compute machine sets. Retain this value.
$ oc get machinesets.machine.openshift.io <compute_machine_set_name_1> \
    -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}'
Display the subnet ID for the AWS VPC cluster. Retain this value.
$ oc get machinesets.machine.openshift.io <compute_machine_set_name_1> \
    -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.subnet.id}'
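Following the preceding tip, a minimal sketch that captures these values as environment variables for use in later steps; the variable names are illustrative:
$ export INFRA_ID=$(oc get -o jsonpath='{.status.infrastructureName}' infrastructures.config.openshift.io cluster)
$ export AMI_ID=$(oc get machinesets.machine.openshift.io <compute_machine_set_name_1> \
    -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}')
$ export SUBNET_ID=$(oc get machinesets.machine.openshift.io <compute_machine_set_name_1> \
    -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet.id}')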
3.13.2.2. Obtaining information from your AWS account
You can use the AWS CLI (aws) to obtain information from your AWS account.
Tip
You might find it convenient to store some or all of these values as environment variables by using the export command.

Prerequisites
You have an AWS Outposts site with the required hardware setup complete.
Your Outpost is connected to your AWS account.
You have access to your AWS account by using the AWS CLI (aws) as a user with permissions to perform the required tasks.

Procedure
List the Outposts that are connected to your AWS account by running the following command:
$ aws outposts list-outposts
Retain the following values from the output of the aws outposts list-outposts command:
The Outpost ID.
The Amazon Resource Name (ARN) for the Outpost.
The Outpost availability zone.
Note
The output of the aws outposts list-outposts command includes two values related to the availability zone: AvailabilityZone and AvailabilityZoneId. You use the AvailabilityZone value to configure a compute machine set that creates compute machines in your Outpost.
Using the value of the Outpost ID, show the instance types that are available in your Outpost by running the following command. Retain the values of the available instance types.
$ aws outposts get-outpost-instance-types \
    --outpost-id <outpost_id_value>
Using the value of the Outpost ARN, show the subnet ID for the Outpost by running the following command. Retain this value.
$ aws ec2 describe-subnets \
    --filters Name=outpost-arn,Values=<outpost_arn_value>

3.13.3. Configuring your network for your Outpost
To extend your VPC cluster into an Outpost, you must complete the following network configuration tasks:
Change the Cluster Network MTU.
Create a subnet in your Outpost.

3.13.3.1. Changing the cluster network MTU to support AWS Outposts
During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
Important
The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect. For more details about the migration process, including important service interruption considerations, see "Changing the MTU for the cluster network" in the additional resources for this procedure.

Prerequisites
You have installed the OpenShift CLI (oc).
You have access to the cluster using an account with cluster-admin permissions.
You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster. The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster.

Procedure
To obtain the current MTU for the cluster network, enter the following command:
$ oc describe network.config cluster
Example output
...
Status:
  Cluster Network:
    Cidr:               10.217.0.0/22
    Host Prefix:        23
  Cluster Network MTU:  1400
  Network Type:         OVNKubernetes
  Service Network:
    10.217.4.0/23
...
To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change.
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
    '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }'
where:
<overlay_from> Specifies the current cluster network MTU value.
<overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to>. For OVN-Kubernetes, this value must be 100 less than the value of <machine_to>. For OpenShift SDN, this value must be 50 less than the value of <machine_to>.
<machine_to> Specifies the MTU for the primary network interface on the underlying host network.
Example that decreases the cluster MTU
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
    '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 1000 } , "machine": { "to" : 1100} } } } }'
As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get machineconfigpools
A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.
Note
By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
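Rather than polling the pool status, you can block until every machine config pool reports the Updated condition; a minimal sketch, with an illustrative timeout value:
$ oc wait mcp --all --for=condition=Updated --timeout=60m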
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"
Example output
kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done
Verify that the following statements are true:
The value of machineconfiguration.openshift.io/state field is Done.
The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep ExecStart
where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
The machine config must include the following update to the systemd configuration:
ExecStart=/usr/local/bin/mtu-migration.sh
Finalize the MTU migration for your plugin. In both example commands, <mtu> specifies the new cluster network MTU that you specified with <overlay_to>.
To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
    '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}'
To finalize the MTU migration, enter the following command for the OpenShift SDN network plugin:
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
    '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}'
After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get machineconfigpools
A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

Verification
Verify that the node in your cluster uses the MTU that you specified by entering the following command:
$ oc describe network.config cluster
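To also confirm the interface MTU on an individual host, for example to check that the <machine_to> value took effect, one option is a debug pod; the node name is a placeholder:
$ oc debug node/<node_name> -- chroot /host ip -d link show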
Additional resources
Changing the MTU for the cluster network

3.13.3.2. Creating subnets for AWS edge compute services
Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create a subnet in AWS Outposts. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet.
Note
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
You configured an AWS account.
You added your AWS keys and region to your local AWS profile by running aws configure.
You have obtained the required information about your environment from your OpenShift Container Platform cluster, Outpost, and AWS account.

Procedure
Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template.
Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires.
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC:
$ aws cloudformation create-stack --stack-name <stack_name> \ 1
     --region ${CLUSTER_REGION} \
     --template-body file://<template>.yaml \ 2
     --parameters \
       ParameterKey=VpcId,ParameterValue="${VPC_ID}" \ 3
       ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \ 4
       ParameterKey=ZoneName,ParameterValue="${ZONE_NAME}" \ 5
       ParameterKey=PublicRouteTableId,ParameterValue="${ROUTE_TABLE_PUB}" \ 6
       ParameterKey=PublicSubnetCidr,ParameterValue="${SUBNET_CIDR_PUB}" \ 7
       ParameterKey=PrivateRouteTableId,ParameterValue="${ROUTE_TABLE_PVT}" \ 8
       ParameterKey=PrivateSubnetCidr,ParameterValue="${SUBNET_CIDR_PVT}" \ 9
       ParameterKey=PrivateSubnetLabel,ParameterValue="private-outpost" \
       ParameterKey=PublicSubnetLabel,ParameterValue="public-outpost" \
       ParameterKey=OutpostArn,ParameterValue="${OUTPOST_ARN}" 10
1 <stack_name> is the name for the CloudFormation stack, such as cluster-<outpost_name>.
2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved.
3 ${VPC_ID} is the VPC ID, which is the value VpcID in the output of the CloudFormation template for the VPC.
4 ${CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names.
5 ${ZONE_NAME} is the value of AWS Outposts name to create the subnets.
6 ${ROUTE_TABLE_PUB} is the Public Route Table ID created in the ${VPC_ID} used to associate the public subnets on Outposts. Specify the public route table to associate the Outpost subnet created by this stack.
7 ${SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr.
8 ${ROUTE_TABLE_PVT} is the Private Route Table ID created in the ${VPC_ID} used to associate the private subnets on Outposts. Specify the private route table to associate the Outpost subnet created by this stack.
9 ${SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr.
10 ${OUTPOST_ARN} is the Amazon Resource Name (ARN) for the Outpost.
Example output
arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f

Verification
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <stack_name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters:
PublicSubnetId The IDs of the public subnet created by the CloudFormation stack.
PrivateSubnetId The IDs of the private subnet created by the CloudFormation stack.
Ensure that you provide these parameter values to the other CloudFormation templates that you run to create your cluster.
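If you want to capture the two subnet IDs directly into shell variables for the machine set configuration, a minimal sketch using JMESPath queries against the same stack follows; the variable names are illustrative:
$ export PUBLIC_SUBNET_ID=$(aws cloudformation describe-stacks --stack-name <stack_name> \
    --query 'Stacks[0].Outputs[?OutputKey==`PublicSubnetId`].OutputValue' --output text)
$ export PRIVATE_SUBNET_ID=$(aws cloudformation describe-stacks --stack-name <stack_name> \
    --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetId`].OutputValue' --output text)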
3.13.3.3. CloudFormation template for the VPC subnet
You can use the following CloudFormation template to deploy the Outpost subnet.
Example 3.38. CloudFormation template for VPC subnets
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice Subnets (Public and Private)

Parameters:
  VpcId:
    Description: VPC ID that comprises all the target subnets.
    Type: String
    AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})$
    ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*.
  ClusterName:
    Description: Cluster name or prefix name to prepend the Name tag for each subnet.
    Type: String
    AllowedPattern: ".+"
    ConstraintDescription: ClusterName parameter must be specified.
  ZoneName:
    Description: Zone Name to create the subnets, such as us-west-2-lax-1a.
    Type: String
    AllowedPattern: ".+"
    ConstraintDescription: ZoneName parameter must be specified.
  PublicRouteTableId:
    Description: Public Route Table ID to associate the public subnet.
    Type: String
    AllowedPattern: ".+"
    ConstraintDescription: PublicRouteTableId parameter must be specified.
  PublicSubnetCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.128.0/20
    Description: CIDR block for public subnet.
    Type: String
  PrivateRouteTableId:
    Description: Private Route Table ID to associate the private subnet.
    Type: String
    AllowedPattern: ".+"
    ConstraintDescription: PrivateRouteTableId parameter must be specified.
  PrivateSubnetCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.128.0/20
    Description: CIDR block for private subnet.
    Type: String
  PrivateSubnetLabel:
    Default: "private"
    Description: Subnet label to be added when building the subnet name.
    Type: String
  PublicSubnetLabel:
    Default: "public"
    Description: Subnet label to be added when building the subnet name.
    Type: String
  OutpostArn:
    Default: ""
    Description: OutpostArn when creating subnets on AWS Outpost.
    Type: String

Conditions:
  OutpostEnabled: !Not [!Equals [!Ref "OutpostArn", ""]]

Resources:
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VpcId
      CidrBlock: !Ref PublicSubnetCidr
      AvailabilityZone: !Ref ZoneName
      OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref "AWS::NoValue"]
      Tags:
      - Key: Name
        Value: !Join ['-', [ !Ref ClusterName, !Ref PublicSubnetLabel, !Ref ZoneName]]
      - Key: kubernetes.io/cluster/unmanaged 1
        Value: true
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTableId
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VpcId
      CidrBlock: !Ref PrivateSubnetCidr
      AvailabilityZone: !Ref ZoneName
      OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref "AWS::NoValue"]
      Tags:
      - Key: Name
        Value: !Join ['-', [!Ref ClusterName, !Ref PrivateSubnetLabel, !Ref ZoneName]]
      - Key: kubernetes.io/cluster/unmanaged 2
        Value: true
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTableId

Outputs:
  PublicSubnetId:
    Description: Subnet ID of the public subnets.
    Value: !Join ["", [!Ref PublicSubnet]]
  PrivateSubnetId:
    Description: Subnet ID of the private subnets.
    Value: !Join ["", [!Ref PrivateSubnet]]
1 You must include the kubernetes.io/cluster/unmanaged tag in the public subnet configuration for AWS Outposts.
2 You must include the kubernetes.io/cluster/unmanaged tag in the private subnet configuration for AWS Outposts.
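Before you create the stack, you can optionally ask CloudFormation to check the saved file for syntax errors; a minimal sketch using the same hypothetical file name as in the procedure:
$ aws cloudformation validate-template --template-body file://<template>.yaml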
3.13.4. Creating a compute machine set that deploys edge compute machines on an Outpost
To create edge compute machines on AWS Outposts, you must create a new compute machine set with a compatible configuration.

Prerequisites
You have an AWS Outposts site.
You have installed an OpenShift Container Platform cluster into a custom VPC on AWS.
You have access to the cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).

Procedure
List the compute machine sets in your cluster by running the following command:
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
Example output
NAME                            DESIRED   CURRENT   READY   AVAILABLE   AGE
<original_machine_set_name_1>   1         1         1       1           55m
<original_machine_set_name_2>   1         1         1       1           55m
Record the names of the existing compute machine sets.
Create a YAML file that contains the values for a new compute machine set custom resource (CR) by using one of the following methods:
Copy an existing compute machine set configuration into a new file by running the following command:
$ oc get machinesets.machine.openshift.io <original_machine_set_name_1> \
    -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml
You can edit this YAML file with your preferred text editor.
Create an empty YAML file named <new_machine_set_name_1>.yaml with your preferred text editor and include the required values for your new compute machine set.
If you are not sure which value to set for a specific field, you can view values of an existing compute machine set CR by running the following command:
$ oc get machinesets.machine.openshift.io <original_machine_set_name_1> \
    -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role>-<availability_zone> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone>
    spec:
      providerSpec: 3
# ...
1 The cluster infrastructure ID.
2 A default node label. For AWS Outposts, you use the outposts role.
3 The omitted providerSpec section includes values that must be configured for your Outpost.
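If you only need the providerSpec values from the existing machine set, for example to copy the AMI, subnet, and security group settings into the new file, a quick sketch:
$ oc get machinesets.machine.openshift.io <original_machine_set_name_1> \
    -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value}'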
Configure the new compute machine set to create edge compute machines in the Outpost by editing the <new_machine_set_name_1>.yaml file:
Example compute machine set for AWS Outposts
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-outposts-<availability_zone> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: outposts
        machine.openshift.io/cluster-api-machine-type: outposts
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/outposts: ""
          location: outposts
      providerSpec:
        value:
          ami:
            id: <ami_id> 3
          apiVersion: machine.openshift.io/v1beta1
          blockDevices:
            - ebs:
                volumeSize: 120
                volumeType: gp2 4
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile
          instanceType: m5.xlarge 5
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <availability_zone>
            region: <region> 6
          securityGroups:
            - filters:
              - name: tag:Name
                values:
                - <infrastructure_id>-worker-sg
          subnet:
            id: <subnet_id> 7
          tags:
            - name: kubernetes.io/cluster/<infrastructure_id>
              value: owned
          userDataSecret:
            name: worker-user-data
      taints: 8
        - key: node-role.kubernetes.io/outposts
          effect: NoSchedule
1 Specifies the cluster infrastructure ID.
2 Specifies the name of the compute machine set. The name is composed of the cluster infrastructure ID, the outposts role name, and the Outpost availability zone.
3 Specifies the Amazon Machine Image (AMI) ID.
4 Specifies the EBS volume type. AWS Outposts requires gp2 volumes.
5 Specifies the AWS instance type. You must use an instance type that is configured in your Outpost.
6 Specifies the AWS region in which the Outpost availability zone exists.
7 Specifies the dedicated subnet for your Outpost.
8 Specifies a taint to prevent workloads from being scheduled on nodes that have the node-role.kubernetes.io/outposts label. To schedule user workloads in the Outpost, you must specify a corresponding toleration in the Deployment resource for your application.
Save your changes.
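Before creating the resource, you can optionally validate the edited manifest against the API server without persisting it; a minimal sketch using the same hypothetical file name:
$ oc create -f <new_machine_set_name_1>.yaml --dry-run=server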
Create a compute machine set CR by running the following command:
$ oc create -f <new_machine_set_name_1>.yaml

Verification
To verify that the compute machine set is created, list the compute machine sets in your cluster by running the following command:
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
Example output
NAME                            DESIRED   CURRENT   READY   AVAILABLE   AGE
<new_machine_set_name_1>        1         1         1       1           4m12s
<original_machine_set_name_1>   1         1         1       1           55m
<original_machine_set_name_2>   1         1         1       1           55m
To list the machines that are managed by the new compute machine set, run the following command:
$ oc get -n openshift-machine-api machines.machine.openshift.io \
    -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>
Example output
NAME                   PHASE          TYPE        REGION      ZONE         AGE
<machine_from_new_1>   Provisioned    m5.xlarge   us-east-1   us-east-1a   25s
<machine_from_new_2>   Provisioning   m5.xlarge   us-east-1   us-east-1a   25s
To verify that a machine created by the new compute machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command:
$ oc describe machine <machine_from_new_1> -n openshift-machine-api

3.13.5. Creating user workloads in an Outpost
After you extend an OpenShift Container Platform cluster in an AWS VPC into an Outpost, you can use edge compute nodes with the label node-role.kubernetes.io/outposts to create user workloads in the Outpost.

Prerequisites
You have extended an AWS VPC cluster into an Outpost.
You have access to the cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).
You have created a compute machine set that deploys edge compute machines compatible with the Outpost environment.

Procedure
Configure a Deployment resource file for an application that you want to deploy to the edge compute node in the edge subnet.
Example Deployment manifest
kind: Namespace
apiVersion: v1
metadata:
  name: <application_name> 1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <application_name>
  namespace: <application_namespace> 2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2-csi 3
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <application_name>
  namespace: <application_namespace>
spec:
  selector:
    matchLabels:
      app: <application_name>
  replicas: 1
  template:
    metadata:
      labels:
        app: <application_name>
        location: outposts 4
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      nodeSelector: 5
        node-role.kubernetes.io/outposts: ''
      tolerations: 6
      - key: "node-role.kubernetes.io/outposts"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      containers:
        - image: openshift/origin-node
          command:
            - "/bin/socat"
          args:
            - TCP4-LISTEN:8080,reuseaddr,fork
            - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"'
          imagePullPolicy: Always
          name: <application_name>
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/mnt/storage"
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: <application_name>
1 Specify a name for your application.
2 Specify a namespace for your application. The application namespace can be the same as the application name.
3 Specify the storage class name. For an edge compute configuration, you must use the gp2-csi storage class.
4 Specify a label to identify workloads deployed in the Outpost.
5 Specify the node selector label that targets edge compute nodes.
6 Specify tolerations that match the key and effects taints in the compute machine set for your edge compute machines. Set the value and operator tolerations as shown.
Create the Deployment resource by running the following command:
$ oc create -f <application_deployment>.yaml
Configure a Service object that exposes a pod from a targeted edge compute node to services that run inside your edge network.
Example Service manifest
apiVersion: v1
kind: Service 1
metadata:
  name: <application_name>
  namespace: <application_namespace>
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
  selector: 2
    app: <application_name>
1 Defines the service resource.
2 Specify the label type to apply to managed pods.
Create the Service CR by running the following command:
$ oc create -f <application_service>.yaml
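To confirm that the pod actually landed on an edge compute node rather than a cloud-based node, a quick sketch; the namespace placeholder matches the manifests above, and the NODE column should show a node that carries the node-role.kubernetes.io/outposts label:
$ oc get pods -n <application_namespace> -o wide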
3.13.6. Scheduling workloads on edge and cloud-based AWS compute resources
When you extend an AWS VPC cluster into an Outpost, the Outpost uses edge compute nodes and the VPC uses cloud-based compute nodes. The following load balancer considerations apply to an AWS VPC cluster extended into an Outpost:
Outposts cannot run AWS Network Load Balancers or AWS Classic Load Balancers, but a Classic Load Balancer for a VPC cluster extended into an Outpost can attach to the Outpost edge compute nodes. For more information, see Using AWS Classic Load Balancers in an AWS VPC cluster extended into an Outpost.
To run a load balancer on an Outpost instance, you must use an AWS Application Load Balancer. You can use the AWS Load Balancer Operator to deploy an instance of the AWS Load Balancer Controller. The controller provisions AWS Application Load Balancers for Kubernetes Ingress resources. For more information, see Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost.

3.13.6.1. Using AWS Classic Load Balancers in an AWS VPC cluster extended into an Outpost
AWS Outposts infrastructure cannot run AWS Classic Load Balancers, but Classic Load Balancers in the AWS VPC cluster can target edge compute nodes in the Outpost if edge and cloud-based subnets are in the same availability zone. As a result, Classic Load Balancers on the VPC cluster might schedule pods on either of these node types.
Scheduling the workloads on edge compute nodes and cloud-based compute nodes can introduce latency. If you want to prevent a Classic Load Balancer in the VPC cluster from targeting Outpost edge compute nodes, you can apply labels to the cloud-based compute nodes and configure the Classic Load Balancer to only schedule on nodes with the applied labels.
Note
If you do not need to prevent a Classic Load Balancer in the VPC cluster from targeting Outpost edge compute nodes, you do not need to complete these steps.

Prerequisites
You have extended an AWS VPC cluster into an Outpost.
You have access to the cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).
You have created a user workload in the Outpost with tolerations that match the taints for your edge compute machines.

Procedure
Optional: Verify that the edge compute nodes have the location=outposts label by running the following command and verifying that the output includes only the edge compute nodes in your Outpost:
$ oc get nodes -l location=outposts
Label the cloud-based compute nodes in the VPC cluster with a key-value pair by running the following command:
$ for NODE in $(oc get node -l node-role.kubernetes.io/worker --no-headers | grep -v outposts | awk '{print $1}'); do oc label node $NODE <key_name>=<value>; done
where <key_name>=<value> is the label you want to use to distinguish cloud-based compute nodes.
Example output
node1.example.com labeled
node2.example.com labeled
node3.example.com labeled
Optional: Verify that the cloud-based compute nodes have the specified label by running the following command and confirming that the output includes all cloud-based compute nodes in your VPC cluster:
$ oc get nodes -l <key_name>=<value>
Example output
NAME                STATUS   ROLES    AGE   VERSION
node1.example.com   Ready    worker   7h    v1.29.4
node2.example.com   Ready    worker   7h    v1.29.4
node3.example.com   Ready    worker   7h    v1.29.4
Configure the Classic Load Balancer service by adding the cloud-based subnet information to the annotations field of the Service manifest:
Example service configuration
apiVersion: v1
kind: Service
metadata:
  labels:
    app: <application_name>
  name: <application_name>
  namespace: <application_namespace>
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-subnets: <aws_subnet> 1
    service.beta.kubernetes.io/aws-load-balancer-target-node-labels: <key_name>=<value> 2
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: <application_name>
  type: LoadBalancer
1 Specify the subnet ID for the AWS VPC cluster.
2 Specify the key-value pair that matches the pair in the node label.
Create the Service CR by running the following command:
$ oc create -f <file_name>.yaml

Verification
Verify the status of the service resource to show the host of the provisioned Classic Load Balancer by running the following command:
$ HOST=$(oc get service <application_name> -n <application_namespace> --template='{{(index .status.loadBalancer.ingress 0).hostname}}')
Verify the status of the provisioned Classic Load Balancer host by running the following command:
$ curl $HOST
In the AWS console, verify that only the labeled instances appear as the targeted instances for the load balancer.
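You can also confirm the registered instances from the command line instead of the AWS console; a minimal sketch using the AWS CLI for Classic Load Balancers, where the load balancer name is a placeholder that you can derive from the hostname in $HOST or look up in the console:
$ aws elb describe-instance-health --load-balancer-name <load_balancer_name>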
3.13.6.2. Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost
You can configure the AWS Load Balancer Operator to provision an AWS Application Load Balancer in an AWS VPC cluster extended into an Outpost. AWS Outposts does not support AWS Network Load Balancers. As a result, the AWS Load Balancer Operator cannot provision Network Load Balancers in an Outpost.
You can create an AWS Application Load Balancer either in the cloud subnet or in the Outpost subnet. An Application Load Balancer in the cloud can attach to cloud-based compute nodes and an Application Load Balancer in the Outpost can attach to edge compute nodes. You must annotate Ingress resources with the Outpost subnet or the VPC subnet, but not both.

Prerequisites
You have extended an AWS VPC cluster into an Outpost.
You have installed the OpenShift CLI (oc).
You have installed the AWS Load Balancer Operator and created the AWS Load Balancer Controller.

Procedure
Configure the Ingress resource to use a specified subnet:
Example Ingress resource configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <application_name>
  annotations:
    alb.ingress.kubernetes.io/subnets: <subnet_id> 1
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: <application_name>
                port:
                  number: 80
1 Specifies the subnet to use. To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID. To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
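After the controller reconciles the Ingress, you can retrieve the hostname of the provisioned Application Load Balancer and test it; a minimal sketch, where the namespace placeholder is whatever namespace holds the Ingress and <alb_hostname> stands for the value returned by the first command:
$ oc get ingress <application_name> -n <application_namespace> \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
$ curl http://<alb_hostname>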
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 19 pullSecret: '{\"auths\": ...}' 20",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: 13 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 14 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 15 propagateUserTags: true 16 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 19 sshKey: ssh-ed25519 AAAA... 20 pullSecret: '{\"auths\": ...}' 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"subnets: - subnet-1 - subnet-2 - subnet-3",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\": ...}' 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }",
"aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones",
"aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #",
"apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' #",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]",
"platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1",
"./openshift-install create manifests --dir <installation_directory>",
"spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h",
"oc get nodes -l node-role.kubernetes.io/edge",
"NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DeleteCarrierGateway\", \"ec2:CreateCarrierGateway\" ], \"Resource\": \"*\" }, { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }",
"aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=wavelength-zone --all-availability-zones",
"aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <wavelength_zone_name> #",
"apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-wl1-lax-wlz-1 - us-west-2-wl1-las-wlz-1 pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' #",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters \\// ParameterKey=VpcId,ParameterValue=\"USD{VpcId}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{ClusterName}\" 4",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: \"AWS::EC2::CarrierGateway\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"cagw\"]] PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public-carrier\"]] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]",
"platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicOrPrivateSubnetID-Wavelength-1",
"./openshift-install create manifests --dir <installation_directory>",
"spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1-wbclh Running c5d.2xlarge us-east-1 us-east-1-wl1-nyc-wlz-1 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h",
"oc get nodes -l node-role.kubernetes.io/edge",
"NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructures.config.openshift.io cluster",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m",
"oc get machinesets.machine.openshift.io <compute_machine_set_name_1> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}'",
"oc get machinesets.machine.openshift.io <compute_machine_set_name_1> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet.id}'",
"aws outposts list-outposts",
"aws outposts get-outpost-instance-types --outpost-id <outpost_id_value>",
"aws ec2 describe-subnets --filters Name=outpost-arn,Values=<outpost_arn_value>",
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 1000 } , \"machine\": { \"to\" : 1100} } } } }'",
"oc get machineconfigpools",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get machineconfigpools",
"oc describe network.config cluster",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" \\ 9 ParameterKey=PrivateSubnetLabel,ParameterValue=\"private-outpost\" ParameterKey=PublicSubnetLabel,ParameterValue=\"public-outpost\" ParameterKey=OutpostArn,ParameterValue=\"USD{OUTPOST_ARN}\" 10",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String PrivateSubnetLabel: Default: \"private\" Description: Subnet label to be added when building the subnet name. Type: String PublicSubnetLabel: Default: \"public\" Description: Subnet label to be added when building the subnet name. Type: String OutpostArn: Default: \"\" Description: OutpostArn when creating subnets on AWS Outpost. Type: String Conditions: OutpostEnabled: !Not [!Equals [!Ref \"OutpostArn\", \"\"]] Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref \"AWS::NoValue\"] Tags: - Key: Name Value: !Join ['-', [ !Ref ClusterName, !Ref PublicSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 1 Value: true PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref \"AWS::NoValue\"] Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, !Ref PrivateSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 2 Value: true PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m",
"oc get machinesets.machine.openshift.io <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml",
"oc get machinesets.machine.openshift.io <original_machine_set_name_1> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-outposts-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: outposts machine.openshift.io/cluster-api-machine-type: outposts machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> spec: metadata: labels: node-role.kubernetes.io/outposts: \"\" location: outposts providerSpec: value: ami: id: <ami_id> 3 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: volumeSize: 120 volumeType: gp2 4 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m5.xlarge 5 kind: AWSMachineProviderConfig placement: availabilityZone: <availability_zone> region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg subnet: id: <subnet_id> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned userDataSecret: name: worker-user-data taints: 8 - key: node-role.kubernetes.io/outposts effect: NoSchedule",
"oc create -f <new_machine_set_name_1>.yaml",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m",
"oc get -n openshift-machine-api machines.machine.openshift.io -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned m5.xlarge us-east-1 us-east-1a 25s <machine_from_new_2> Provisioning m5.xlarge us-east-1 us-east-1a 25s",
"oc describe machine <machine_from_new_1> -n openshift-machine-api",
"kind: Namespace apiVersion: v1 metadata: name: <application_name> 1 --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <application_name> namespace: <application_namespace> 2 spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 3 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment metadata: name: <application_name> namespace: <application_namespace> spec: selector: matchLabels: app: <application_name> replicas: 1 template: metadata: labels: app: <application_name> location: outposts 4 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 5 node-role.kubernetes.io/outpost: '' tolerations: 6 - key: \"node-role.kubernetes.io/outposts\" operator: \"Equal\" value: \"\" effect: \"NoSchedule\" containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: <application_name> ports: - containerPort: 8080 volumeMounts: - mountPath: \"/mnt/storage\" name: data volumes: - name: data persistentVolumeClaim: claimName: <application_name>",
"oc create -f <application_deployment>.yaml",
"apiVersion: v1 kind: Service 1 metadata: name: <application_name> namespace: <application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <application_name>",
"oc create -f <application_service>.yaml",
"oc get nodes -l location=outposts",
"for NODE in USD(oc get node -l node-role.kubernetes.io/worker --no-headers | grep -v outposts | awk '{printUSD1}'); do oc label node USDNODE <key_name>=<value>; done",
"node1.example.com labeled node2.example.com labeled node3.example.com labeled",
"oc get nodes -l <key_name>=<value>",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.29.4 node2.example.com Ready worker 7h v1.29.4 node3.example.com Ready worker 7h v1.29.4",
"apiVersion: v1 kind: Service metadata: labels: app: <application_name> name: <application_name> namespace: <application_namespace> annotations: service.beta.kubernetes.io/aws-load-balancer-subnets: <aws_subnet> 1 service.beta.kubernetes.io/aws-load-balancer-target-node-labels: <key_name>=<value> 2 spec: ports: - name: http port: 80 protocol: TCP targetPort: 8080 selector: app: <application_name> type: LoadBalancer",
"oc create -f <file_name>.yaml",
"HOST=USD(oc get service <application_name> -n <application_namespace> --template='{{(index .status.loadBalancer.ingress 0).hostname}}')",
"curl USDHOST",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <application_name> annotations: alb.ingress.kubernetes.io/subnets: <subnet_id> 1 spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <application_name> port: number: 80"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_aws/installer-provisioned-infrastructure |
Chapter 6. View OpenShift Data Foundation Topology | Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health, or an indication of alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_red_hat_virtualization_platform/viewing-odf-topology_rhodf
Chapter 6. Subscriptions | Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4 or 8 for each core which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured the calculation for the number of cores required for subscription purposes depends on the SMT level. 
Therefore, a 2-core subscription corresponds to 2 vCPUs at an SMT level of 1, to 4 vCPUs at an SMT level of 2, to 8 vCPUs at an SMT level of 4, and to 16 vCPUs at an SMT level of 8, as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at an SMT level of 8 requires a 2-core subscription, based on dividing the number of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power has a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation deployment should be a multiple of core-pairs. 6.5. Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/subscriptions_rhodf
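As an illustration of the counting rules above, the following minimal Java sketch reproduces the two worked examples (8 vCPUs on a hyperthreaded x86 host, and 16 vCPUs on IBM Power at an SMT level of 8); the class and method names are invented for this illustration and are not part of any Red Hat product or API.

public class SubscriptionEstimate {

    // 2 cores : 4 vCPUs when hyperthreading is enabled, 2 cores : 2 vCPUs when it is not
    static int coresForX86(int vCpus, boolean hyperthreading) {
        return hyperthreading ? (int) Math.ceil(vCpus / 2.0) : vCpus;
    }

    // IBM Power: cores = vCPUs divided by the SMT level (1, 2, 4, or 8), rounded up
    static int coresForPower(int vCpus, int smtLevel) {
        return (int) Math.ceil((double) vCpus / smtLevel);
    }

    // Subscriptions are sold in 2-core units, so an odd core count consumes a full unit
    static int twoCoreSubscriptions(int cores) {
        return (int) Math.ceil(cores / 2.0);
    }

    public static void main(String[] args) {
        // 8 vCPUs on a hyperthreaded host -> 4 cores -> two 2-core subscriptions
        System.out.println(twoCoreSubscriptions(coresForX86(8, true)));
        // 16 vCPUs on IBM Power at SMT-8 -> 2 cores -> one 2-core subscription
        System.out.println(twoCoreSubscriptions(coresForPower(16, 8)));
    }
}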
Chapter 5. Red Hat Process Automation Manager Spring Boot configuration | Chapter 5. Red Hat Process Automation Manager Spring Boot configuration After you create your Spring Boot project, you can configure several components to customize your application. 5.1. Configuring REST endpoints for Spring Boot applications After you create your Spring Boot project, you can configure the host, port, and path for the REST endpoint for your Spring Boot application. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. Configure the host, port, and path for the REST endpoints, where <ADDRESS> is the server address and <PORT> is the server port: server.address=<ADDRESS> server.port=<PORT> cxf.path=/rest The following example adds the REST endpoint to the address localhost on port 8090 . server.address=localhost server.port=8090 cxf.path=/rest 5.2. Configuring the KIE Server identity After you create your Spring Boot project, you can configure KIE Server so that it can be easily identified. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. Configure the KIE Server parameters as shown in the following example: kieserver.serverId=<BUSINESS-APPLICATION>-service kieserver.serverName=<BUSINESS-APPLICATION>-service kieserver.location=http://localhost:8090/rest/server kieserver.controllers=http://localhost:8080/business-central/rest/controller The following table describes the KIE Server parameters that you can configure in your business project: Table 5.1. kieserver parameters Parameter Values Description kieserver.serverId string The ID used to identify the business application when connecting to the Process Automation Manager controller. kieserver.serverName string The name used to identify the business application when it connects to the Process Automation Manager controller. Can be the same string used for the kieserver.serverId parameter. kieserver.location URL Used by other components that use the REST API to identify the location of this server. Do not use the location as defined by server.address and server.port . kieserver.controllers URLs A comma-separated list of controller URLs. 5.3. Integrating Apache Kafka with your Red Hat Process Automation Manager Spring Boot project Apache Kafka is a distributed data streaming platform that can publish, subscribe to, store, and process streams of records in real time. It is designed to handle data streams from multiple sources and deliver them to multiple consumers. Apache Kafka is an alternative to a traditional enterprise messaging system. You can integrate Apache Kafka with your Red Hat Process Automation Manager Spring Boot project. Prerequisites You have an existing Red Hat Process Automation Manager Spring Boot project. 
Procedure In your Spring Boot project directory, open the business-application-service/src/main/resources/application.properties file. Add the kieserver.kafka.enabled system property with value true : kieserver.kafka.enabled=true Additional resources Integrating Red Hat Process Automation Manager with Red Hat AMQ Streams 5.4. Configuring KIE Server components to start at runtime If you selected Business Automation when you created your Spring Boot business application, you can specify which KIE Server components must start at runtime. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. To set a component to start at runtime, set the value of the component to true. The following table lists the components that you can set to start at runtime: Table 5.2. kieserver capabilities parameters Parameter Values Description kieserver.drools.enabled true, false Enables or disables the Decision Manager component. kieserver.dmn.enabled true, false Enables or disables the Decision Model and Notation (DMN) component. kieserver.jbpm.enabled true, false Enables or disables the Red Hat Process Automation Manager component. kieserver.jbpmui.enabled true, false Enables or disables the Red Hat Process Automation Manager UI component. kieserver.casemgmt.enabled true, false Enables or disables the case management component. 5.5. Configuring your Spring Boot application for asynchronous execution After you create your Spring Boot project, you can use the jbpm.executor parameters to enable asynchronous execution. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. To enable asynchronous execution, set the value of the jbpm.executor.enabled parameter to true , uncomment the other jbpm.executor parameters, and change the values as required, as shown in the following example: jbpm.executor.enabled=true jbpm.executor.retries=5 jbpm.executor.interval=0 jbpm.executor.threadPoolSize=1 jbpm.executor.timeUnit=SECONDS The following table describes the executor parameters that you can configure in your business project: Table 5.3. Executor parameters Parameter Values Description jbpm.executor.enabled true, false Disables or enables the executor component. jbpm.executor.retries integer Specifies the number of retries if errors occur while a job is running. jbpm.executor.interval integer Specifies the length of time that the executor uses to synchronize with the database. The unit of time is specified by the jbpm.executor.timeUnit parameter. Disabled by default (value 0 ). jbpm.executor.threadPoolSize integer Specifies the thread pool size. jbpm.executor.timeUnit string Specifies the time unit used to calculate the interval that the executor uses to synchronize with the database. 
The value must be a valid constant of java.util.concurrent.TimeUnit . The default value is SECONDS . 5.6. Configuring the business application for a cluster using Quartz If you plan to run your application in a cluster, you must configure the Quartz timer service. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Create the quartz.properties file and add the following content: #============================================================================ # Configure Main Scheduler Properties #============================================================================ org.quartz.scheduler.instanceName = SpringBootScheduler org.quartz.scheduler.instanceId = AUTO org.quartz.scheduler.skipUpdateCheck=true org.quartz.scheduler.idleWaitTime=1000 #============================================================================ # Configure ThreadPool #============================================================================ org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool org.quartz.threadPool.threadCount = 5 org.quartz.threadPool.threadPriority = 5 #============================================================================ # Configure JobStore #============================================================================ org.quartz.jobStore.misfireThreshold = 60000 org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT org.quartz.jobStore.driverDelegateClass=org.jbpm.process.core.timer.impl.quartz.DeploymentsAwareStdJDBCDelegate org.quartz.jobStore.useProperties=false org.quartz.jobStore.dataSource=myDS org.quartz.jobStore.nonManagedTXDataSource=notManagedDS org.quartz.jobStore.tablePrefix=QRTZ_ org.quartz.jobStore.isClustered=true org.quartz.jobStore.clusterCheckinInterval = 5000 #============================================================================ # Configure Datasources #============================================================================ org.quartz.dataSource.myDS.connectionProvider.class=org.jbpm.springboot.quartz.SpringConnectionProvider org.quartz.dataSource.myDS.dataSourceName=quartzDataSource org.quartz.dataSource.notManagedDS.connectionProvider.class=org.jbpm.springboot.quartz.SpringConnectionProvider org.quartz.dataSource.notManagedDS.dataSourceName=quartzNotManagedDataSource Note Data source names in the Quartz configuration file refer to Spring beans. The connection provider must be set to org.jbpm.springboot.quartz.SpringConnectionProvider to enable integration with Spring-based data sources. To enable the Quartz clustered timers and set the path of the quartz.properties file that you created in the previous step, include the following properties in the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources/application.properties file, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. 
jbpm.quartz.enabled=true jbpm.quartz.configuration=quartz.properties Create a managed and an unmanaged data source by adding the following content to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources/application.properties file: # enable to use database as storage jbpm.quartz.db=true quartz.datasource.name=quartz quartz.datasource.username=sa quartz.datasource.password=sa quartz.datasource.url=jdbc:h2:./target/spring-boot-jbpm;MVCC=true quartz.datasource.driver-class-name=org.h2.Driver # used to configure connection pool quartz.datasource.dbcp2.maxTotal=15 # used to initialize quartz schema quartz.datasource.initialization=true spring.datasource.schema=classpath*:<QUARTZ_TABLES_H2>.sql spring.datasource.initialization-mode=always In the preceding example, replace <QUARTZ_TABLES_H2> with the name of a Quartz H2 database schema script. The last three lines of the preceding configuration initialize the database schema. By default, Quartz requires two data sources: Managed data source to participate in the transaction of the decision engine or process engine Unmanaged data source to look up timers to trigger without any transaction handling Red Hat Process Automation Manager business applications assume that the Quartz database (schema) will be co-located with Red Hat Process Automation Manager tables and therefore produce data sources used for transactional operations for Quartz. The other (non transactional) data source must be configured but it should point to the same database as the main data source. 5.7. Configuring business application user group providers With Red Hat Process Automation Manager, you can manage human-centric activities. To provide integration with user and group repositories, you can use two KIE API entry points: UserGroupCallback : Responsible for verifying whether a user or group exists and for collecting groups for a specific user UserInfo : Responsible for collecting additional information about users and groups, for example email addresses and preferred language You can configure both of these components by providing alternative code, either code provided out of the box or custom developed code. For the UserGroupCallback component, retain the default implementation because it is based on the security context of the application. For this reason, it does not matter which backend store is used for authentication and authorisation (for example, RH-SSO). It will be automatically used as a source of information for collecting user and group information. The UserInfo component is a separate component because it collects more advanced information. Prerequisites You have a Spring Boot business application. Procedure To provide an alternative implementation of UserGroupCallback , add the following code to the Application class or a separate class annotated with @Configuration : @Bean(name = "userGroupCallback") public UserGroupCallback userGroupCallback(IdentityProvider identityProvider) throws IOException { return new MyCustomUserGroupCallback(identityProvider); } To provide an alternative implementation of UserInfo , add the following code to the Application class or a separate class annotated with @Configuration : @Bean(name = "userInfo") public UserInfo userInfo() throws IOException { return new MyCustomUserInfo(); } 5.8. Configuring a Spring Boot project with a MySQL or PostgreSQL database Red Hat Process Automation Manager business applications are generated with the default H2 database. 
You can change the database type to MySQL or PostgreSQL. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. To configure your Spring Boot project to use a MySQL or PostgreSQL database, complete one of the following sets of steps: To configure your business application to use a MySQL database, locate the following parameters in the application.properties file and change the values as shown: spring.datasource.username=jbpm spring.datasource.password=jbpm spring.datasource.url=jdbc:mysql://localhost:3306/jbpm spring.datasource.driver-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect To configure your business application to use a PostgreSQL database, locate the following parameters in the application.properties file and change the values as shown: spring.datasource.username=jbpm spring.datasource.password=jbpm spring.datasource.url=jdbc:postgresql://localhost:5432/jbpm spring.datasource.driver-class-name=org.postgresql.xa.PGXADataSource spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect Note To create a PostgreSQL schema that uses the bytea column type instead of the oid column type, set the value of the org.kie.persistence.postgresql.useBytea property to true : Save the application.properties file. 5.9. Configuring business applications for JPA The Java Persistence API (JPA) is a standard technology that enables you to map objects to relational databases. You must configure JPA for your Red Hat Process Automation Manager business application. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. Locate the following parameters in the application.properties file and verify that they have the values shown: spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.H2Dialect spring.jpa.properties.hibernate.show_sql=false spring.jpa.properties.hibernate.hbm2ddl.auto=update spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl If your business application has business automation capabilities, you can add entities to the entity manager factory by adding a comma-separated list of packages: spring.jpa.properties.entity-scan-packages=org.jbpm.springboot.samples.entities Business applications with business automation capabilities create an entity manager factory based on the persistence.xml file that comes with Red Hat Process Automation Manager. All entities found in the org.jbpm.springboot.samples.entities package are automatically added to the entity manager factory and used in the same way as any other JPA entity in the application.
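For illustration only, an entity class placed in a scanned package might look like the following sketch. The class name and fields here are hypothetical and are not part of the generated project; only the package has to match an entry in spring.jpa.properties.entity-scan-packages.

package org.jbpm.springboot.samples.entities;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;      // primary key generated by the JPA provider

    private String name;  // simple persistent field

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

Because the class is annotated with @Entity and its package is listed in entity-scan-packages, it is picked up by the entity manager factory described above and can then be used like any other JPA entity in the application.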
Additional resources For more information about configuring JPA, see the Spring Boot Reference Guide . 5.10. Configuring pluggable variable persistence You can provide an arbitrary entity manager for configured process variable persistence in your Red Hat Process Automation Manager Spring Boot application. To do this, add named beans during the object marshalling strategy resolution. This enables you to configure a second entity manager factory based on a second data source qualifier. Note that this configuration will not interfere with the primary data source. Prerequisites You have an existing Red Hat Process Automation Manager Spring Boot project. Procedure Add a customized entity manager JavaBean to your Java class. The following example shows an entity manager JavaBean called auditEntityManager for a Java Persistence API (JPA) data source: @Bean(name = "auditEntityManager") @ConditionalOnMissingBean(name = "auditEntityManager") public LocalContainerEntityManagerFactoryBean entityManagerFactory(@Qualifier("jpaAuditDataSource") DataSource dataSource, JpaProperties jpaProperties) { return EntityManagerFactoryHelper.create(applicationContext, dataSource, jpaProperties, "custom-persistent-unit", "classpath:/META-INF/persistence.xml"); } The auditEntityManager becomes an implicit context parameter when the parameters are resolved during MVFLEX Expression Language (MVEL) evaluation. Add the following marshalling strategy to the kie-deployment-descriptor.xml file: <marshalling-strategy> <resolver>mvel</resolver> <identifier>new org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy(auditEntityManager) </identifier> <parameters/> </marshalling-strategy> Additional resources For more information about persistence, see the " Persisting process variables in a separate database schema in Red Hat Process Automation Manager " section in Managing and monitoring KIE Server . 5.11. Enabling Swagger documentation You can enable Swagger-based documentation for all endpoints available in the service project of your Red Hat Process Automation Manager business application. Prerequisites You have a Spring Boot business application. Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the service project pom.xml file in a text editor. Add the following dependencies to the service project pom.xml file and save the file. <dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-rs-service-description-swagger</artifactId> <version>3.2.6</version> </dependency> <dependency> <groupId>io.swagger</groupId> <artifactId>swagger-jaxrs</artifactId> <version>1.5.15</version> <exclusions> <exclusion> <groupId>javax.ws.rs</groupId> <artifactId>jsr311-api</artifactId> </exclusion> </exclusions> </dependency> To enable the Swagger UI (optional), add the following dependency to the pom.xml file and save the file. <dependency> <groupId>org.webjars</groupId> <artifactId>swagger-ui</artifactId> <version>2.2.10</version> </dependency> Open the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources/application.properties file in a text editor. Add the following line to the application.properties file to enable Swagger support: kieserver.swagger.enabled=true After you start the business application, you can view the Swagger document at http://localhost:8090/rest/swagger.json .
The complete set of endpoints is available at http://localhost:8090/rest/api-docs?url=http://localhost:8090/rest/swagger.json . | [
"server.address=<ADDRESS> server.port=<PORT> cxf.path=/rest",
"server.address=localhost server.port=8090 cxf.path=/rest",
"kieserver.serverId=<BUSINESS-APPLICATION>-service kieserver.serverName=<BUSINESS-APPLICATION>-service kieserver.location=http://localhost:8090/rest/server kieserver.controllers=http://localhost:8080/business-central/rest/controller",
"kieserver.kafka.enabled=true",
"jbpm.executor.enabled=true jbpm.executor.retries=5 jbpm.executor.interval=0 jbpm.executor.threadPoolSize=1 jbpm.executor.timeUnit=SECONDS",
"#============================================================================ Configure Main Scheduler Properties #============================================================================ org.quartz.scheduler.instanceName = SpringBootScheduler org.quartz.scheduler.instanceId = AUTO org.quartz.scheduler.skipUpdateCheck=true org.quartz.scheduler.idleWaitTime=1000 #============================================================================ Configure ThreadPool #============================================================================ org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool org.quartz.threadPool.threadCount = 5 org.quartz.threadPool.threadPriority = 5 #============================================================================ Configure JobStore #============================================================================ org.quartz.jobStore.misfireThreshold = 60000 org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT org.quartz.jobStore.driverDelegateClass=org.jbpm.process.core.timer.impl.quartz.DeploymentsAwareStdJDBCDelegate org.quartz.jobStore.useProperties=false org.quartz.jobStore.dataSource=myDS org.quartz.jobStore.nonManagedTXDataSource=notManagedDS org.quartz.jobStore.tablePrefix=QRTZ_ org.quartz.jobStore.isClustered=true org.quartz.jobStore.clusterCheckinInterval = 5000 #============================================================================ Configure Datasources #============================================================================ org.quartz.dataSource.myDS.connectionProvider.class=org.jbpm.springboot.quartz.SpringConnectionProvider org.quartz.dataSource.myDS.dataSourceName=quartzDataSource org.quartz.dataSource.notManagedDS.connectionProvider.class=org.jbpm.springboot.quartz.SpringConnectionProvider org.quartz.dataSource.notManagedDS.dataSourceName=quartzNotManagedDataSource",
"jbpm.quartz.enabled=true jbpm.quartz.configuration=quartz.properties",
"enable to use database as storage jbpm.quartz.db=true quartz.datasource.name=quartz quartz.datasource.username=sa quartz.datasource.password=sa quartz.datasource.url=jdbc:h2:./target/spring-boot-jbpm;MVCC=true quartz.datasource.driver-class-name=org.h2.Driver used to configure connection pool quartz.datasource.dbcp2.maxTotal=15 used to initialize quartz schema quartz.datasource.initialization=true spring.datasource.schema=classpath*:<QUARTZ_TABLES_H2>.sql spring.datasource.initialization-mode=always",
"@Bean(name = \"userGroupCallback\") public UserGroupCallback userGroupCallback(IdentityProvider identityProvider) throws IOException { return new MyCustomUserGroupCallback(identityProvider); }",
"@Bean(name = \"userInfo\") public UserInfo userInfo() throws IOException { return new MyCustomUserInfo(); }",
"spring.datasource.username=jbpm spring.datasource.password=jbpm spring.datasource.url=jdbc:mysql://localhost:3306/jbpm spring.datasource.driver-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect",
"spring.datasource.username=jbpm spring.datasource.password=jbpm spring.datasource.url=jdbc:postgresql://localhost:5432/jbpm spring.datasource.driver-class-name=org.postgresql.xa.PGXADataSource spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect",
"org.kie.persistence.postgresql.useBytea=true",
"spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.H2Dialect spring.jpa.properties.hibernate.show_sql=false spring.jpa.properties.hibernate.hbm2ddl.auto=update spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl",
"spring.jpa.properties.entity-scan-packages=org.jbpm.springboot.samples.entities",
"@Bean(name = \"auditEntityManager\") @ConditionalOnMissingBean(name = \"auditEntityManager\") public LocalContainerEntityManagerFactoryBean entityManagerFactory(@Qualifier(\"jpaAuditDataSource\") DataSource dataSource, JpaProperties jpaProperties) { return EntityManagerFactoryHelper.create(applicationContext, dataSource, jpaProperties, \"custom-persistent-unit\", \"classpath:/META-INF/persistence.xml\"); }",
"<marshalling-strategy> <resolver>mvel</resolver> <identifier>new org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy(auditEntityManager) </identifier> <parameters/> </marshalling-strategy>",
"<dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-rs-service-description-swagger</artifactId> <version>3.2.6</version> </dependency> <dependency> <groupId>io.swagger</groupId> <artifactId>swagger-jaxrs</artifactId> <version>1.5.15</version> <exclusions> <exclusion> <groupId>javax.ws.rs</groupId> <artifactId>jsr311-api</artifactId> </exclusion> </exclusions> </dependency>",
"<dependency> <groupId>org.webjars</groupId> <artifactId>swagger-ui</artifactId> <version>2.2.10</version> </dependency>",
"kieserver.swagger.enabled=true"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/bus-app-configure-con_business-applications |
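As a supplement to Section 5.4 above, the component switches listed in Table 5.2 are ordinary entries in business-application-service/src/main/resources/application.properties. The following sketch shows one possible combination; it is illustrative only, and the exact set of properties and defaults generated by the archetype may differ:

kieserver.drools.enabled=true
kieserver.dmn.enabled=true
kieserver.jbpm.enabled=true
kieserver.jbpmui.enabled=true
kieserver.casemgmt.enabled=false

With these values, the decision, DMN, and process components start at runtime while case management stays disabled; flip any value to change which capabilities the KIE Server exposes.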
4.9. Encryption | 4.9. Encryption 4.9.1. Using LUKS Disk Encryption Linux Unified Key Setup-on-disk-format (or LUKS) allows you to encrypt partitions on your Linux computer. This is particularly important when it comes to mobile computers and removable media. LUKS allows multiple user keys to decrypt a master key, which is used for the bulk encryption of the partition. Overview of LUKS What LUKS does LUKS encrypts entire block devices and is therefore well-suited for protecting the contents of mobile devices such as removable storage media or laptop disk drives. The underlying contents of the encrypted block device are arbitrary. This makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. LUKS provides passphrase strengthening which protects against dictionary attacks. LUKS devices contain multiple key slots, allowing users to add backup keys or passphrases. What LUKS does not do: LUKS is not well-suited for scenarios requiring many (more than eight) users to have distinct access keys to the same device. LUKS is not well-suited for applications requiring file-level encryption. Important Disk-encryption solutions like LUKS only protect the data when your system is off. Once the system is on and LUKS has decrypted the disk, the files on that disk are available to anyone who would normally have access to them. 4.9.1.1. LUKS Implementation in Red Hat Enterprise Linux Red Hat Enterprise Linux 7 utilizes LUKS to perform file system encryption. By default, the option to encrypt the file system is unchecked during the installation. If you select the option to encrypt your hard drive, you will be prompted for a passphrase that will be asked every time you boot the computer. This passphrase "unlocks" the bulk encryption key that is used to decrypt your partition. If you choose to modify the default partition table you can choose which partitions you want to encrypt. This is set in the partition table settings. The default cipher used for LUKS (see cryptsetup --help ) is aes-cbc-essiv:sha256 (ESSIV - Encrypted Salt-Sector Initialization Vector). Note that the installation program, Anaconda , uses by default XTS mode (aes-xts-plain64). The default key size for LUKS is 256 bits. The default key size for LUKS with Anaconda (XTS mode) is 512 bits. Ciphers that are available are: AES - Advanced Encryption Standard - FIPS PUB 197 Twofish (a 128-bit block cipher) Serpent cast5 - RFC 2144 cast6 - RFC 2612 4.9.1.2. Manually Encrypting Directories Warning Following this procedure will remove all data on the partition that you are encrypting. You WILL lose all your information! Make sure you backup your data to an external source before beginning this procedure! Enter runlevel 1 by typing the following at a shell prompt as root: Unmount your existing /home : If the command in the step fails, use fuser to find processes hogging /home and kill them: Verify /home is no longer mounted: Fill your partition with random data: This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step to ensure no unencrypted data is left on a used device, and to obfuscate the parts of the device that contain encrypted data as opposed to just random data. 
Initialize your partition: Open the newly encrypted device: Make sure the device is present: Create a file system: Mount the file system: Make sure the file system is visible: Add the following to the /etc/crypttab file: Edit the /etc/fstab file, removing the old entry for /home and adding the following line: Restore default SELinux security contexts: Reboot the machine: The entry in the /etc/crypttab makes your computer ask your luks passphrase on boot. Log in as root and restore your backup. You now have an encrypted partition for all of your data to safely rest while the computer is off. 4.9.1.3. Add a New Passphrase to an Existing Device Use the following command to add a new passphrase to an existing device: After being prompted for any one of the existing passprases for authentication, you will be prompted to enter the new passphrase. 4.9.1.4. Remove a Passphrase from an Existing Device Use the following command to remove a passphrase from an existing device: You will be prompted for the passphrase you want to remove and then for any one of the remaining passphrases for authentication. 4.9.1.5. Creating Encrypted Block Devices in Anaconda You can create encrypted devices during system installation. This allows you to easily configure a system with encrypted partitions. To enable block device encryption, check the Encrypt System check box when selecting automatic partitioning or the Encrypt check box when creating an individual partition, software RAID array, or logical volume. After you finish partitioning, you will be prompted for an encryption passphrase. This passphrase will be required to access the encrypted devices. If you have pre-existing LUKS devices and provided correct passphrases for them earlier in the install process the passphrase entry dialog will also contain a check box. Checking this check box indicates that you would like the new passphrase to be added to an available slot in each of the pre-existing encrypted block devices. Note Checking the Encrypt System check box on the Automatic Partitioning screen and then choosing Create custom layout does not cause any block devices to be encrypted automatically. Note You can use kickstart to set a separate passphrase for each new encrypted block device. 4.9.1.6. Additional Resources For additional information on LUKS or encrypting hard drives under Red Hat Enterprise Linux 7 visit one of the following links: LUKS home page LUKS/cryptsetup FAQ LUKS - Linux Unified Key Setup Wikipedia article HOWTO: Creating an encrypted Physical Volume (PV) using a second hard drive and pvmove 4.9.2. Creating GPG Keys GPG is used to identify yourself and authenticate your communications, including those with people you do not know. GPG allows anyone reading a GPG-signed email to verify its authenticity. In other words, GPG allows someone to be reasonably certain that communications signed by you actually are from you. GPG is useful because it helps prevent third parties from altering code or intercepting conversations and altering the message. 4.9.2.1. Creating GPG Keys in GNOME To create a GPG Key in GNOME , follow these steps: Install the Seahorse utility, which makes GPG key management easier: To create a key, from the Applications Accessories menu select Passwords and Encryption Keys , which starts the application Seahorse . From the File menu select New and then PGP Key . Then click Continue . Type your full name, email address, and an optional comment describing who you are (for example: John C. 
Smith, [email protected] , Software Engineer). Click Create . A dialog is displayed asking for a passphrase for the key. Choose a strong passphrase but also easy to remember. Click OK and the key is created. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.2. Creating GPG Keys in KDE To create a GPG Key in KDE , follow these steps: Start the KGpg program from the main menu by selecting Applications Utilities Encryption Tool . If you have never used KGpg before, the program walks you through the process of creating your own GPG keypair. A dialog box appears prompting you to create a new key pair. Enter your name, email address, and an optional comment. You can also choose an expiration time for your key, as well as the key strength (number of bits) and algorithms. Enter your passphrase in the dialog box. At this point, your key appears in the main KGpg window. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.3. Creating GPG Keys Using the Command Line Use the following shell command: This command generates a key pair that consists of a public and a private key. Other people use your public key to authenticate and decrypt your communications. Distribute your public key as widely as possible, especially to people who you know will want to receive authentic communications from you, such as a mailing list. A series of prompts directs you through the process. Press the Enter key to assign a default value if desired. The first prompt asks you to select what kind of key you prefer: In almost all cases, the default is the correct choice. An RSA/RSA key allows you not only to sign communications, but also to encrypt files. Choose the key size: Again, the default, 2048, is sufficient for almost all users, and represents an extremely strong level of security. Choose when the key will expire. It is a good idea to choose an expiration date instead of using the default, which is none . If, for example, the email address on the key becomes invalid, an expiration date will remind others to stop using that public key. Entering a value of 1y , for example, makes the key valid for one year. (You may change this expiration date after the key is generated, if you change your mind.) Before the gpg2 application asks for signature information, the following prompt appears: Enter y to finish the process. Enter your name and email address for your GPG key. Remember this process is about authenticating you as a real individual. For this reason, include your real name. If you choose a bogus email address, it will be more difficult for others to find your public key. This makes authenticating your communications difficult. If you are using this GPG key for self-introduction on a mailing list, for example, enter the email address you use on that list. Use the comment field to include aliases or other information. 
(Some people use different keys for different purposes and identify each key with a comment, such as "Office" or "Open Source Projects.") At the confirmation prompt, enter the letter O to continue if all entries are correct, or use the other options to fix any problems. Finally, enter a passphrase for your secret key. The gpg2 program asks you to enter your passphrase twice to ensure you made no typing errors. Finally, gpg2 generates random data to make your key as unique as possible. Move your mouse, type random keys, or perform other tasks on the system during this step to speed up the process. Once this step is finished, your keys are complete and ready to use: The key fingerprint is a shorthand "signature" for your key. It allows you to confirm to others that they have received your actual public key without any tampering. You do not need to write this fingerprint down. To display the fingerprint at any time, use this command, substituting your email address: Your "GPG key ID" consists of 8 hex digits identifying the public key. In the example above, the GPG key ID is 1B2AFA1C . In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . Warning If you forget your passphrase, the key cannot be used and any data encrypted using that key will be lost. 4.9.2.4. About Public Key Encryption Wikipedia - Public Key Cryptography HowStuffWorks - Encryption 4.9.3. Using openCryptoki for Public-Key Cryptography openCryptoki is a Linux implementation of PKCS#11 , which is a Public-Key Cryptography Standard that defines an application programming interface ( API ) to cryptographic devices called tokens. Tokens may be implemented in hardware or software. This chapter provides an overview of the way the openCryptoki system is installed, configured, and used in Red Hat Enterprise Linux 7. 4.9.3.1. Installing openCryptoki and Starting the Service To install the basic openCryptoki packages on your system, including a software implementation of a token for testing purposes, enter the following command as root : Depending on the type of hardware tokens you intend to use, you may need to install additional packages that provide support for your specific use case. For example, to obtain support for Trusted Platform Module ( TPM ) devices, you need to install the opencryptoki-tpmtok package. See the Installing Packages section of the Red Hat Enterprise Linux 7 System Administrator's Guide for general information on how to install packages using the Yum package manager. To enable the openCryptoki service, you need to run the pkcsslotd daemon. Start the daemon for the current session by executing the following command as root : To ensure that the service is automatically started at boot time, enter the following command: See the Managing Services with systemd chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for more information on how to use systemd targets to manage services. 4.9.3.2. Configuring and Using openCryptoki When started, the pkcsslotd daemon reads the /etc/opencryptoki/opencryptoki.conf configuration file, which it uses to collect information about the tokens configured to work with the system and about their slots. The file defines the individual slots using key-value pairs. Each slot definition can contain a description, a specification of the token library to be used, and an ID of the slot's manufacturer. Optionally, the version of the slot's hardware and firmware may be defined. 
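For illustration, a slot definition in /etc/opencryptoki/opencryptoki.conf might look like the following sketch. The key names and values shown here (the slot number, the stdll library, and the optional description, manufacturer, and version fields) are assumptions for the software token and should be checked against the opencryptoki.conf (5) manual page before use:

slot 3
{
stdll = libpkcs11_sw.so
description = "software token"
manufacturer = "IBM"
hwversion = "1.0"
firmwareversion = "1.0"
}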
See the opencryptoki.conf (5) manual page for a description of the file's format and for a more detailed description of the individual keys and the values that can be assigned to them. To modify the behavior of the pkcsslotd daemon at run time, use the pkcsconf utility. This tool allows you to show and configure the state of the daemon, as well as to list and modify the currently configured slots and tokens. For example, to display information about tokens, issue the following command (note that all non-root users that need to communicate with the pkcsslotd daemon must be a part of the pkcs11 system group): See the pkcsconf (1) manual page for a list of arguments available with the pkcsconf tool. Warning Keep in mind that only fully trusted users should be assigned membership in the pkcs11 group, as all members of this group have the right to block other users of the openCryptoki service from accessing configured PKCS#11 tokens. All members of this group can also execute arbitrary code with the privileges of any other users of openCryptoki . 4.9.4. Using Smart Cards to Supply Credentials to OpenSSH The smart card is a lightweight hardware security module in a USB stick, MicroSD, or SmartCard form factor. It provides a remotely manageable secure key store. In Red Hat Enterprise Linux 7, OpenSSH supports authentication using smart cards. To use your smart card with OpenSSH, store the public key from the card to the ~/.ssh/authorized_keys file. Install the PKCS#11 library provided by the opensc package on the client. PKCS#11 is a Public-Key Cryptography Standard that defines an application programming interface (API) to cryptographic devices called tokens. Enter the following command as root : 4.9.4.1. Retrieving a Public Key from a Card To list the keys on your card, use the ssh-keygen command. Specify the shared library (OpenSC in the following example) with the -D directive. 4.9.4.2. Storing a Public Key on a Server To enable authentication using a smart card on a remote server, transfer the public key to the remote server. Do it by copying the retrieved string (key) and pasting it to the remote shell, or by storing your key to a file ( smartcard.pub in the following example) and using the ssh-copy-id command: Storing a public key without a private key file requires to use the SSH_COPY_ID_LEGACY=1 environment variable or the -f option. 4.9.4.3. Authenticating to a Server with a Key on a Smart Card OpenSSH can read your public key from a smart card and perform operations with your private key without exposing the key itself. This means that the private key does not leave the card. To connect to a remote server using your smart card for authentication, enter the following command and enter the PIN protecting your card: Replace the hostname with the actual host name to which you want to connect. To save unnecessary typing time you connect to the remote server, store the path to the PKCS#11 library in your ~/.ssh/config file: Connect by running the ssh command without any additional options: 4.9.4.4. Using ssh-agent to Automate PIN Logging In Set up environmental variables to start using ssh-agent . You can skip this step in most cases because ssh-agent is already running in a typical session. 
Use the following command to check whether you can connect to your authentication agent: To avoid writing your PIN every time you connect using this key, add the card to the agent by running the following command: To remove the card from ssh-agent , use the following command: Note FIPS 201-2 requires explicit user action by the Personal Identity Verification (PIV) cardholder as a condition for use of the digital signature key stored on the card. OpenSC correctly enforces this requirement. However, for some applications it is impractical to require the cardholder to enter the PIN for each signature. To cache the smart card PIN, remove the # character before the pin_cache_ignore_user_consent = true; option in the /etc/opensc-x86_64.conf . See the Cardholder Authentication for the PIV Digital Signature Key (NISTIR 7863) report for more information. 4.9.4.5. Additional Resources Setting up your hardware or software token is described in the Smart Card support in Red Hat Enterprise Linux 7 article. For more information about the pkcs11-tool utility for managing and using smart cards and similar PKCS#11 security tokens, see the pkcs11-tool(1) man page. 4.9.5. Trusted and Encrypted Keys Trusted and encrypted keys are variable-length symmetric keys generated by the kernel that utilize the kernel keyring service. The fact that the keys never appear in user space in an unencrypted form means that their integrity can be verified, which in turn means that they can be used, for example, by the extended verification module ( EVM ) to verify and confirm the integrity of a running system. User-level programs can only ever access the keys in the form of encrypted blobs . Trusted keys need a hardware component: the Trusted Platform Module ( TPM ) chip, which is used to both create and encrypt ( seal ) the keys. The TPM seals the keys using a 2048-bit RSA key called the storage root key ( SRK ). In addition to that, trusted keys may also be sealed using a specific set of the TPM 's platform configuration register ( PCR ) values. The PCR contains a set of integrity-management values that reflect the BIOS , boot loader, and operating system. This means that PCR -sealed keys can only be decrypted by the TPM on the exact same system on which they were encrypted. However, once a PCR -sealed trusted key is loaded (added to a keyring), and thus its associated PCR values are verified, it can be updated with new (or future) PCR values, so that a new kernel, for example, can be booted. A single key can also be saved as multiple blobs, each with different PCR values. Encrypted keys do not require a TPM , as they use the kernel AES encryption, which makes them faster than trusted keys. Encrypted keys are created using kernel-generated random numbers and encrypted by a master key when they are exported into user-space blobs. This master key can be either a trusted key or a user key, which is their main disadvantage - if the master key is not a trusted key, the encrypted key is only as secure as the user key used to encrypt it. 4.9.5.1. Working with keys Before performing any operations with the keys, ensure that the trusted and encrypted-keys kernel modules are loaded in the system. Consider the following points while loading the kernel modules in different RHEL kernel architectures: For RHEL kernels with the x86_64 architecture, the TRUSTED_KEYS and ENCRYPTED_KEYS code is built in as a part of the core kernel code. 
As a result, the x86_64 system users can use these keys without loading the trusted and encrypted-keys modules. For all other architectures, it is necessary to load the trusted and encrypted-keys kernel modules before performing any operations with the keys. To load the kernel modules, execute the following command: The trusted and encrypted keys can be created, loaded, exported, and updated using the keyctl utility. For detailed information about using keyctl , see keyctl (1) . Note In order to use a TPM (such as for creating and sealing trusted keys), it needs to be enabled and active. This can be usually achieved through a setting in the machine's BIOS or using the tpm_setactive command from the tpm-tools package of utilities. Also, the TrouSers application needs to be installed (the trousers package), and the tcsd daemon, which is a part of the TrouSers suite, running to communicate with the TPM . To create a trusted key using a TPM , execute the keyctl command with the following syntax: ~]USD keyctl add trusted name "new keylength [ options ]" keyring Using the above syntax, an example command can be constructed as follows: The above example creates a trusted key called kmk with the length of 32 bytes (256 bits) and places it in the user keyring ( @u ). The keys may have a length of 32 to 128 bytes (256 to 1024 bits). Use the show subcommand to list the current structure of the kernel keyrings: The print subcommand outputs the encrypted key to the standard output. To export the key to a user-space blob, use the pipe subcommand as follows: To load the trusted key from the user-space blob, use the add command again with the blob as an argument: The TPM -sealed trusted key can then be employed to create secure encrypted keys. The following command syntax is used for generating encrypted keys: ~]USD keyctl add encrypted name "new [ format ] key-type : master-key-name keylength " keyring Based on the above syntax, a command for generating an encrypted key using the already created trusted key can be constructed as follows: To create an encrypted key on systems where a TPM is not available, use a random sequence of numbers to generate a user key, which is then used to seal the actual encrypted keys. Then generate the encrypted key using the random-number user key: The list subcommand can be used to list all keys in the specified kernel keyring: Important Keep in mind that encrypted keys that are not sealed by a master trusted key are only as secure as the user master key (random-number key) used to encrypt them. Therefore, the master user key should be loaded as securely as possible and preferably early during the boot process. 4.9.5.2. Additional Resources The following offline and online resources can be used to acquire additional information pertaining to the use of trusted and encrypted keys. Installed Documentation keyctl (1) - Describes the use of the keyctl utility and its subcommands. Online Documentation Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services, such as the Apache HTTP Server . https://www.kernel.org/doc/Documentation/security/keys-trusted-encrypted.txt - The official documentation about the trusted and encrypted keys feature of the Linux kernel. 
See Also Section A.1.1, "Advanced Encryption Standard - AES" provides a concise description of the Advanced Encryption Standard . Section A.2, "Public-key Encryption" describes the public-key cryptographic approach and the various cryptographic protocols it uses. 4.9.6. Using the Random Number Generator In order to be able to generate secure cryptographic keys that cannot be easily broken, a source of random numbers is required. Generally, the more random the numbers are, the better the chance of obtaining unique keys. Entropy for generating random numbers is usually obtained from computing environmental "noise" or using a hardware random number generator . The rngd daemon, which is a part of the rng-tools package, is capable of using both environmental noise and hardware random number generators for extracting entropy. The daemon checks whether the data supplied by the source of randomness is sufficiently random and then stores it in the random-number entropy pool of the kernel. The random numbers it generates are made available through the /dev/random and /dev/urandom character devices. The difference between /dev/random and /dev/urandom is that the former is a blocking device, which means it stops supplying numbers when it determines that the amount of entropy is insufficient for generating a properly random output. Conversely, /dev/urandom is a non-blocking source, which reuses the entropy pool of the kernel and is thus able to provide an unlimited supply of pseudo-random numbers, albeit with less entropy. As such, /dev/urandom should not be used for creating long-term cryptographic keys. To install the rng-tools package, issue the following command as the root user: To start the rngd daemon, execute the following command as root : To query the status of the daemon, use the following command: To start the rngd daemon with optional parameters, execute it directly. For example, to specify an alternative source of random-number input (other than /dev/hwrandom ), use the following command: The command starts the rngd daemon with /dev/hwrng as the device from which random numbers are read. Similarly, you can use the -o (or --random-device ) option to choose the kernel device for random-number output (other than the default /dev/random ). See the rngd (8) manual page for a list of all available options. To check which sources of entropy are available in a given system, execute the following command as root : Note After entering the rngd -v command, the according process continues running in background. The -b, --background option (become a daemon) is applied by default. If there is not any TPM device present, you will see only the Intel Digital Random Number Generator (DRNG) as a source of entropy. To check if your CPU supports the RDRAND processor instruction, enter the following command: Note For more information and software code examples, see Intel Digital Random Number Generator (DRNG) Software Implementation Guide. The rng-tools package also contains the rngtest utility, which can be used to check the randomness of data. To test the level of randomness of the output of /dev/random , use the rngtest tool as follows: A high number of failures shown in the output of the rngtest tool indicates that the randomness of the tested data is insufficient and should not be relied upon. See the rngtest (1) manual page for a list of options available for the rngtest utility. 
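As a supplementary check that is not part of the original procedure, you can watch the size of the kernel entropy pool to see whether rngd is keeping it filled; the number of available bits is exposed through the /proc interface:

~]$ cat /proc/sys/kernel/random/entropy_avail
3137

The value shown is only an example. Consistently low readings suggest that the configured entropy sources are not supplying enough randomness for the blocking /dev/random device.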
Red Hat Enterprise Linux 7 introduced the virtio RNG (Random Number Generator) device that provides KVM virtual machines with access to entropy from the host machine. With the recommended setup, hwrng feeds into the entropy pool of the host Linux kernel (through /dev/random ), and QEMU will use /dev/random as the source for entropy requested by guests. Figure 4.1. The virtio RNG device Previously, Red Hat Enterprise Linux 7.0 and Red Hat Enterprise Linux 6 guests could make use of the entropy from hosts through the rngd user space daemon. Setting up the daemon was a manual step for each Red Hat Enterprise Linux installation. With Red Hat Enterprise Linux 7.1, the manual step has been eliminated, making the entire process seamless and automatic. The use of rngd is now not required and the guest kernel itself fetches entropy from the host when the available entropy falls below a specific threshold. The guest kernel is then in a position to make random numbers available to applications as soon as they request them. The Red Hat Enterprise Linux installer, Anaconda , now provides the virtio-rng module in its installer image, making available host entropy during the Red Hat Enterprise Linux installation. Important To correctly decide which random number generator you should use in your scenario, see the Understanding the Red Hat Enterprise Linux random number generator interface article. | [
"telinit 1",
"umount /home",
"fuser -mvk /home",
"grep home /proc/mounts",
"shred -v --iterations=1 /dev/VG00/LV_home",
"cryptsetup --verbose --verify-passphrase luksFormat /dev/VG00/LV_home",
"cryptsetup luksOpen /dev/VG00/LV_home home",
"ls -l /dev/mapper | grep home",
"mkfs.ext3 /dev/mapper/home",
"mount /dev/mapper/home /home",
"df -h | grep home",
"home /dev/VG00/LV_home none",
"/dev/mapper/home /home ext3 defaults 1 2",
"/sbin/restorecon -v -R /home",
"shutdown -r now",
"cryptsetup luksAddKey device",
"cryptsetup luksRemoveKey device",
"~]# yum install seahorse",
"~]USD gpg2 --gen-key",
"Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your selection?",
"RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (2048)",
"Please specify how long the key should be valid. 0 = key does not expire d = key expires in n days w = key expires in n weeks m = key expires in n months y = key expires in n years key is valid for? (0)",
"Is this correct (y/N)?",
"pub 1024D/1B2AFA1C 2005-03-31 John Q. Doe <[email protected]> Key fingerprint = 117C FE83 22EA B843 3E86 6486 4320 545E 1B2A FA1C sub 1024g/CEA4B22E 2005-03-31 [expires: 2006-03-31]",
"~]USD gpg2 --fingerprint [email protected]",
"~]# yum install opencryptoki",
"~]# systemctl start pkcsslotd",
"~]# systemctl enable pkcsslotd",
"~]USD pkcsconf -t",
"~]# yum install opensc",
"~]USD ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so ssh-rsa AAAAB3NzaC1yc[...]+g4Mb9",
"~]USD ssh-copy-id -f -i smartcard.pub user@hostname user@hostname's password: Number of key(s) added: 1 Now try logging into the machine, with: \"ssh user@hostname\" and check to make sure that only the key(s) you wanted were added.",
"[localhost ~]USD ssh -I /usr/lib64/pkcs11/opensc-pkcs11.so hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD",
"Host hostname PKCS11Provider /usr/lib64/pkcs11/opensc-pkcs11.so",
"[localhost ~]USD ssh hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD",
"~]USD ssh-add -l Could not open a connection to your authentication agent. ~]USD eval `ssh-agent`",
"~]USD ssh-add -s /usr/lib64/pkcs11/opensc-pkcs11.so Enter PIN for 'Test (UserPIN)': Card added: /usr/lib64/pkcs11/opensc-pkcs11.so",
"~]USD ssh-add -e /usr/lib64/pkcs11/opensc-pkcs11.so Card removed: /usr/lib64/pkcs11/opensc-pkcs11.so",
"~]# modprobe trusted encrypted-keys",
"~]USD keyctl add trusted kmk \"new 32\" @u 642500861",
"~]USD keyctl show Session Keyring -3 --alswrv 500 500 keyring: _ses 97833714 --alswrv 500 -1 \\_ keyring: _uid.1000 642500861 --alswrv 500 500 \\_ trusted: kmk",
"~]USD keyctl pipe 642500861 > kmk.blob",
"~]USD keyctl add trusted kmk \"load `cat kmk.blob`\" @u 268728824",
"~]USD keyctl add encrypted encr-key \"new trusted:kmk 32\" @u 159771175",
"~]USD keyctl add user kmk-user \"`dd if=/dev/urandom bs=1 count=32 2>/dev/null`\" @u 427069434",
"~]USD keyctl add encrypted encr-key \"new user:kmk-user 32\" @u 1012412758",
"~]USD keyctl list @u 2 keys in keyring: 427069434: --alswrv 1000 1000 user: kmk-user 1012412758: --alswrv 1000 1000 encrypted: encr-key",
"~]# yum install rng-tools",
"~]# systemctl start rngd",
"~]# systemctl status rngd",
"~]# rngd --rng-device= /dev/hwrng",
"~]# rngd -vf Unable to open file: /dev/tpm0 Available entropy sources: DRNG",
"~]USD cat /proc/cpuinfo | grep rdrand",
"~]USD cat /dev/random | rngtest -c 1000 rngtest 5 Copyright (c) 2004 by Henrique de Moraes Holschuh This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. rngtest: starting FIPS tests rngtest: bits received from input: 20000032 rngtest: FIPS 140-2 successes: 998 rngtest: FIPS 140-2 failures: 2 rngtest: FIPS 140-2(2001-10-10) Monobit: 0 rngtest: FIPS 140-2(2001-10-10) Poker: 0 rngtest: FIPS 140-2(2001-10-10) Runs: 0 rngtest: FIPS 140-2(2001-10-10) Long run: 2 rngtest: FIPS 140-2(2001-10-10) Continuous run: 0 rngtest: input channel speed: (min=1.171; avg=8.453; max=11.374)Mibits/s rngtest: FIPS tests speed: (min=15.545; avg=143.126; max=157.632)Mibits/s rngtest: Program run time: 2390520 microseconds"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Encryption |
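A supplementary example for the LUKS passphrase-management steps in the chapter above: after adding or removing a passphrase, you can confirm which key slots are in use by dumping the LUKS header. The device path below reuses the example volume from the chapter and is illustrative:

~]# cryptsetup luksDump /dev/VG00/LV_home

The output lists the cipher and hash in use and the eight key slots; slots that hold a passphrase are reported as ENABLED and unused slots as DISABLED.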
Chapter 37. Securing passwords with a keystore | Chapter 37. Securing passwords with a keystore You can use a keystore to encrypt passwords that are used for communication between Business Central and KIE Server. You should encrypt both controller and KIE Server passwords. If Business Central and KIE Server are deployed to different application servers, then both application servers should use the keystore. Use Java Cryptography Extension KeyStore (JCEKS) for your keystore because it supports symmetric keys. Use KeyTool, which is part of the JDK installation, to create a new JCEKS. Note If KIE Server is not configured with JCEKS, KIE Server passwords are stored in system properties in plain text form. Prerequisites KIE Server is installed in Red Hat JBoss Web Server. Java 8 or higher is installed. Procedure Open the JWS_HOME /tomcat/conf/tomcat-users.xml file in a text editor. Add a KIE Server user with the kie-server role to the JWS_HOME /tomcat/conf/tomcat-users.xml file. In the following example, replace <USER_NAME> and <PASSWORD> with the user name and password of your choice. To use KeyTool to create a JCEKS, enter the following command in the Java 8 home directory: USD<JAVA_HOME>/bin/keytool -importpassword -keystore <KEYSTORE_PATH> -keypass <ALIAS_KEY_PASSWORD> -alias <PASSWORD_ALIAS> -storepass <KEYSTORE_PASSWORD> -storetype JCEKS In this example, replace the following variables: <KEYSTORE_PATH> : The path where the keystore will be stored <KEYSTORE_PASSWORD> : The keystore password <ALIAS_KEY_PASSWORD> : The password used to access values stored with the alias <PASSWORD_ALIAS> : The alias of the entry to the process When prompted, enter the password for the KIE Server user that you created. To set the system properties, complete one of these steps in the JWS_HOME /tomcat/bin directory and replace the variables as described in the following table: Note If Business Central or the standalone controller are installed in separate instances from Red Hat JBoss Web Server, do not add the kie.keystore.key.server.alias and kie.keystore.key.server.pwd properties to CATALINA_OPTS . On Linux or UNIX, create the setenv.sh file with the following content: On Windows, add the following content to the setenv.bat file: Table 37.1. System properties used to load a KIE Server JCEKS System property Placeholder Description kie.keystore.keyStoreURL <KEYSTORE_URL> URL for the JCEKS that you want to use, for example file:///home/kie/keystores/keystore.jceks kie.keystore.keyStorePwd <KEYSTORE_PWD> Password for the JCEKS kie.keystore.key.server.alias <KEY_SERVER_ALIAS> Alias of the key for REST services where the password is stored kie.keystore.key.server.pwd <KEY_SERVER_PWD> Password of the alias for REST services with the stored password kie.keystore.key.ctrl.alias <KEY_CONTROL_ALIAS> Alias of the key for default REST Process Automation Controller where the password is stored kie.keystore.key.ctrl.pwd <KEY_CONTROL_PWD> Password of the alias for default REST Process Automation Controller with the stored password Start KIE Server to verify the configuration. | [
"<role rolename=\"kie-server\"/> <user username=\"<USER_NAME>\" password=\"<PASSWORD>\" roles=\"kie-server\"/>",
"USD<JAVA_HOME>/bin/keytool -importpassword -keystore <KEYSTORE_PATH> -keypass <ALIAS_KEY_PASSWORD> -alias <PASSWORD_ALIAS> -storepass <KEYSTORE_PASSWORD> -storetype JCEKS",
"set CATALINA_OPTS=\" -Dkie.keystore.keyStoreURL=<KEYSTORE_URL> -Dkie.keystore.keyStorePwd=<KEYSTORE_PWD> -Dkie.keystore.key.server.alias=<KEY_SERVER_ALIAS> -Dkie.keystore.key.server.pwd=<KEY_SERVER_PWD> -Dkie.keystore.key.ctrl.alias=<KEY_CONTROL_ALIAS> -Dkie.keystore.key.ctrl.pwd=<KEY_CONTROL_PWD>",
"set CATALINA_OPTS=\" -Dkie.keystore.keyStoreURL=<KEYSTORE_URL> -Dkie.keystore.keyStorePwd=<KEYSTORE_PWD> -Dkie.keystore.key.server.alias=<KEY_SERVER_ALIAS> -Dkie.keystore.key.server.pwd=<KEY_SERVER_PWD> -Dkie.keystore.key.ctrl.alias=<KEY_CONTROL_ALIAS> -Dkie.keystore.key.ctrl.pwd=<KEY_CONTROL_PWD>"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/securing-passwords-jws-proc_install-on-jws |
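A supplementary check for the keystore procedure above: after importing the password entry, you can list the contents of the JCEKS with KeyTool to confirm that the alias was stored. The placeholders are the same ones used in the procedure, and the exact wording of the listing may vary by JDK version:

$<JAVA_HOME>/bin/keytool -list -keystore <KEYSTORE_PATH> -storetype JCEKS -storepass <KEYSTORE_PASSWORD>

The listing should contain an entry for <PASSWORD_ALIAS>, typically reported as a SecretKeyEntry.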
function::task_stime | function::task_stime Name function::task_stime - System time of the current task Synopsis Arguments None Description Returns the system time of the current task in cputime. Does not include any time used by other tasks in this process, nor does it include any time of the children of this task. | [
"function task_stime:long()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-stime |
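A minimal usage sketch for the tapset function above; the probe point, script name, and output format are illustrative. The script prints the accumulated system time of whatever task is current each time the timer fires:

probe timer.s(5)
{
  /* task_stime() returns the cputime of the current task only */
  printf("%s (pid %d) stime: %d\n", execname(), pid(), task_stime())
}

Run it, for example, with stap -v task_stime.stp. The raw cputime value can be converted to milliseconds with a helper such as cputime_to_msecs() where the installed tapset provides it.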
Chapter 3. User tasks | Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.12 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The screen allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click on the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... 
installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 3.2.4. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. 
Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources Operator groups Channel names 3.2.5. Installing a specific version of an Operator You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions OpenShift CLI ( oc ) installed Procedure Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. 
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0: Subscription with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. Create the Subscription object: USD oc apply -f sub.yaml Manually approve the pending install plan to complete the Operator installation. Additional resources Manually approving a pending Operator update | [
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2",
"oc apply -f sub.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operators/user-tasks |
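As a quick check after the CLI-based Operator installation described above, you can watch the Subscription object resolve into an install plan and a cluster service version. This is a sketch only; the names are the placeholders used in the chapter, not literal values:

oc get subscription <subscription_name> -n <namespace>
oc get installplan -n <namespace>
oc get csv -n <namespace>

With a Manual approval strategy, the install plan reported by the second command remains in the RequiresApproval phase until you approve it.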
Chapter 7. Securing Programs Using Sandbox | Chapter 7. Securing Programs Using Sandbox The sandbox security utility adds a set of SELinux policies that allow a system administrator to run an application within a tightly confined SELinux domain. Restrictions on permission to open new files or access to the network can be defined. This enables testing the processing characteristics of untrusted software securely, without risking damage to the system. 7.1. Running an Application Using Sandbox Before using the sandbox utility, the policycoreutils-sandbox package must be installed: The basic syntax to confine an application is: To run a graphical application in a sandbox , use the -X option. For example: The -X option tells sandbox to set up a confined secondary X Server for the application (in this case, evince ), before copying the needed resources and creating a closed virtual environment in the user's home directory or in the /tmp directory. To preserve data from one session to the next: Note that sandbox/home is used for /home and sandbox/tmp is used for /tmp . Different applications are placed in different restricted environments. The application runs in full-screen mode and this prevents access to other functions. As mentioned before, you cannot open or create files except those which are labeled as sandbox_x_file_t . Access to the network is also initially impossible inside the sandbox . To allow access, use the sandbox_web_t label. For example, to launch Firefox : Warning The sandbox_net_t label allows unrestricted, bi-directional network access to all network ports. The sandbox_web_t label allows connections to ports required for web browsing only. Use of sandbox_net_t should be made with caution and only when required. See the sandbox (8) manual page for more information and a full list of available options. | [
"~]# yum install policycoreutils-sandbox",
"~]USD sandbox [options] application_under_test",
"~]USD sandbox -X evince",
"~]USD sandbox -H sandbox/home -T sandbox/tmp -X firefox",
"~]USD sandbox ‐X ‐t sandbox_web_t firefox"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-securing_programs_using_sandbox |
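One way to observe the confinement described in this chapter is to inspect the SELinux labels on the preserved session directories. A sketch, assuming the sandbox/home and sandbox/tmp directories from the earlier example (the paths are illustrative):

ls -Z sandbox/home sandbox/tmp

Files used by the sandboxed application should carry the sandbox file types discussed above, such as sandbox_x_file_t for -X sessions.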
Chapter 7. Deploying storage at the edge | Chapter 7. Deploying storage at the edge You can leverage Red Hat OpenStack Platform director to extend distributed compute node deployments to include distributed image management and persistent storage at the edge with the benefits of using Red Hat OpenStack Platform and Ceph Storage. 7.1. Deploying edge sites with storage After you deploy the central site, build out the edge sites and ensure that each edge location connects primarily to its own storage backend, as well as to the storage back end at the central location. A spine and leaf networking configuration should be included with this configuration, with the addition of the storage and storage_mgmt networks that Ceph needs. For more information, see Spine leaf networking . You must have connectivity between the storage network at the central location and the storage network at each edge site so that you can move glance images between sites. Ensure that the central location can communicate with the mons and osds at each of the edge sites. However, you should terminate the storage management network at site location boundaries, because the storage management network is used for OSD rebalancing. Procedure Export stack information from the central stack. You must deploy the central stack before running this command: Note The config-download-dir value defaults to /var/lib/mistral/<stack>/ . Create the central_ceph_external.yaml file. This environment file connects DCN sites to the central hub Ceph cluster, so the information is specific to the Ceph cluster deployed in the previous steps. When Ceph is deployed without Red Hat OpenStack Platform director, you cannot run the openstack overcloud export ceph command. Manually create the central_ceph_external.yaml file: The fsid parameter is the file system ID of your Ceph Storage cluster: This value is specified in the cluster configuration file in the [global] section: The key parameter is the Ceph client key for the openstack account: For more information about the parameters shown in the sample central_ceph_external.yaml file, see Creating a custom environment file . Create the ~/dcn0/glance.yaml file for Image service configuration overrides: Note If you do not use the GlanceRbdPoolName and CephClientUserName parameters for the glance multi-store configuration, then the values are inherited from the parameters that you used to configure the central location. These values might not be the same, and can result in a failed deployment. Configure the ceph.yaml file with configuration parameters relative to the available hardware. For more information, see Mapping the Ceph Storage node disk layout . Implement system tuning by using a file that contains the following parameters tuned to the requirements of your environment: For more information about setting the values for the parameter CephAnsibleExtraConfig , see Setting ceph-ansible group variables . For more information about setting the values for the parameter CephConfigOverrides , see Customizing the Ceph Storage cluster . Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match. The CinderVolumeCluster parameter is included when deploying an edge site with storage. This parameter is used when cinder-volume is deployed as active/active, which is required at edge sites.
As a best practice, set the Cinder cluster name to match the availability zone: Generate the roles.yaml file to be used for the dcn0 deployment, for example: Set the number of systems in each role by creating the ~/dcn0/roles-counts.yaml file with the desired values for each role. You must allocate three nodes to satisfy requirements for GlanceApiEdge services. Use the DistributedComputeHCICount parameter for hyperconverged infrastructure. For other architectures, use the DistributedComputeCount parameter. Retrieve the container images for the edge site: Note You must include all environment files to be used for the deployment in the openstack tripleo container image prepare command. Deploy the edge site: Note You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details. You must ensure that nova cell_v2 host mappings are created in the nova API database after the edge locations are deployed. Run the following command on the undercloud: If you scale up an edge site, you must run this command again. 7.2. Deploying edge sites with dedicated Ceph nodes You can deploy dedicated Ceph nodes using Red Hat OpenStack Platform director. Procedure Export stack information from the central stack. You must deploy the central stack before running this command: Note The config-download-dir value defaults to /var/lib/mistral/<stack>/ . Create the central_ceph_external.yaml file. This environment file connects DCN sites to the central hub Ceph cluster, so the information is specific to the Ceph cluster deployed in the previous steps. Create the ~/dcn0/glance.yaml file for glance configuration overrides: Configure the ceph.yaml file with configuration parameters relative to the available hardware. For more information, see Mapping the Ceph Storage node disk layout . Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match. The CinderVolumeCluster parameter is included when deploying an edge site with storage. This parameter is used when cinder-volume is deployed as active/active, which is required at edge sites. As a best practice, set the Cinder cluster name to match the availability zone: Generate the roles.yaml file to be used for the dcn0 deployment, for example: Set the number of systems in each role by creating the ~/dcn0/roles-counts.yaml file with the desired values for each role. You must allocate three nodes for the DistributedCompute role to satisfy requirements for GlanceApiEdge services, and three nodes for the CephAll role. Retrieve the container images for the edge site: Note You must include all environment files to be used for the deployment in the openstack tripleo container image prepare command. Deploy the edge site: Note You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details. You must ensure that nova cell_v2 host mappings are created in the nova API database after the edge locations are deployed. Run the following command on the undercloud: If you scale up an edge site, you must run this command again. 7.3. Using a pre-installed Red Hat Ceph Storage cluster at the edge You can configure Red Hat OpenStack Platform to use a pre-existing Ceph cluster.
This is called an external Ceph deployment. Prerequisites You must have a preinstalled Ceph cluster that is local to your DCN site so that latency requirements are not exceeded. Procedure Create the following pools in your Ceph cluster. If you are deploying at the central location, include the backups and metrics pools: Replace <_PGnum_> with the number of placement groups. You can use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value. Create the OpenStack client user in Ceph to provide the Red Hat OpenStack Platform environment access to the appropriate pools: Save the provided Ceph client key that is returned. Use this key as the value for the CephClientKey parameter when you configure the undercloud. Note If you run this command at the central location and plan to use Cinder backup or telemetry services, add allow rwx pool=backups, allow rwx pool=metrics to the command. Save the file system ID of your Ceph Storage cluster. The value of the fsid parameter in the [global] section of your Ceph configuration file is the file system ID: Use this value as the value for the CephClusterFSID parameter when you configure the undercloud. On the undercloud, create an environment file to configure your nodes to connect to the unmanaged Ceph cluster. Use a recognizable naming convention, such as ceph-external-<SITE>.yaml where SITE is the location for your deployment, such as ceph-external-central.yaml, ceph-external-dcn1.yaml, and so on. Use the previously saved values for the CephClusterFSID and CephClientKey parameters. Use a comma-delimited list of IP addresses from the Ceph monitors as the value for the CephExternalMonHost parameter. You must select a unique value for the CephClusterName parameter amongst edge sites. Reusing a name will result in the configuration file being overwritten. If you deployed Red Hat Ceph Storage using Red Hat OpenStack Platform director at the central location, then you can export the Ceph configuration to an environment file central_ceph_external.yaml . This environment file connects DCN sites to the central hub Ceph cluster, so the information is specific to the Ceph cluster deployed in the previous steps: If the central location has Red Hat Ceph Storage deployed externally, then you cannot use the openstack overcloud export ceph command to generate the central_ceph_external.yaml file. You must create the central_ceph_external.yaml file manually instead: Create an environment file with similar details about each site with an unmanaged Red Hat Ceph Storage cluster for the central location. The openstack overcloud export ceph command does not work for sites with unmanaged Red Hat Ceph Storage clusters. When you update the central location, this file allows the central location to use the storage clusters at your edge sites as secondary locations. Use the ceph-ansible-external.yaml, ceph-external-<SITE>.yaml, and the central_ceph_external.yaml environment files when deploying the overcloud: Redeploy the central location after all edge locations have been deployed. 7.4. Creating additional distributed compute node sites A new distributed compute node (DCN) site has its own directory of YAML files on the undercloud. For more information, see Section 4.7, "Managing separate heat stacks" . This procedure contains example commands.
Procedure As the stack user on the undercloud, create a new directory for dcn9 : Copy the existing dcn0 templates to the new directory and replace the dcn0 strings with dcn9 : Review the files in the dcn9 directory to confirm that they suit your requirements. Edit undercloud.conf to add a new leaf. In the following example, leaf9 is added to undercloud.conf: Rerun the openstack undercloud install command to update the environment configuration. In your overcloud templates, update the value of the NetworkDeploymentActions parameter from a value of ["CREATE"] , to a value of ["CREATE", "UPDATE"] . If this parameter is not currently included in your templates, add it to one of your environment files, or create a new environment file. Run the deploy script for the central location. Include all templates that you used when you first deployed the central location, as well as the newly created or edited network-environment.yaml file: Verify that your nodes are available and in Provisioning state : When your nodes are available, deploy the new edge site with all appropriate templates: If you've deployed the locations with direct edge-to-edge communication, you must redeploy each edge site to update routes and establish communication with the new location. 7.5. Updating the central location After you configure and deploy all of the edge sites using the sample procedure, update the configuration at the central location so that the central Image service can push images to the edge sites. Warning This procedure restarts the Image service (glance) and interrupts any long running Image service process. For example, if an image is being copied from the central Image service server to a DCN Image service server, that image copy is interrupted and you must restart it. For more information, see Clearing residual data after interrupted Image service processes . Procedure Create a ~/central/glance_update.yaml file similar to the following. This example includes a configuration for two edge sites, dcn0 and dcn1: Create the dcn_ceph.yaml file. In the following example, this file configures the glance service at the central site as a client of the Ceph clusters of the edge sites, dcn0 and dcn1 . Redeploy the central site using the original templates and include the newly created dcn_ceph.yaml and glance_update.yaml files. On a controller at the central location, restart the cinder-volume service. If you deployed the central location with the cinder-backup service, then restart the cinder-backup service too: 7.5.1. Clearing residual data after interrupted Image service processes When you restart the central location, any long-running Image service (glance) processes are interrupted. Before you can restart these processes, you must first clean up residual data on the Controller node that you rebooted, and in the Ceph and Image service databases. Procedure Check and clear residual data in the Controller node that was rebooted. Compare the files in the glance-api.conf file for staging store with the corresponding images in the Image service database, for example <image_ID>.raw . If these corresponding images show importing status, you must recreate the image. If the images show active status, you must delete the data from staging and restart the copy import. Check and clear residual data in Ceph stores. The images that you cleaned from the staging area must have matching records in their stores property in the Ceph stores that contain the image. The image name in Ceph is the image id in the Image service database. 
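For example, to cross-check a residual image, you can list the contents of the images pool on the affected Ceph cluster and compare the entries against the Image service database. This is a sketch; the cluster and pool names follow the conventions used earlier in this chapter:

rbd --cluster dcn0 --pool images ls

Any entry whose name matches an image ID that you already cleaned from the staging area is residual data in that store.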
Clear the Image service database. Clear any images that are in importing status from the import jobs that were interrupted: 7.6. Deploying Red Hat Ceph Storage Dashboard on DCN Procedure To deploy the Red Hat Ceph Storage Dashboard to the central location, see Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment . These steps should be completed prior to deploying the central location. To deploy Red Hat Ceph Storage Dashboard to edge locations, complete the same steps that you completed for the central location; however, you must also complete the following: Ensure that the ManageNetworks parameter has a value of false in your templates for deploying the edge site. When you set ManageNetworks to false , edge sites use the existing networks that were already created in the central stack: You must deploy your own solution for load balancing in order to create a high availability virtual IP. Edge sites do not deploy haproxy, nor pacemaker. When you deploy Red Hat Ceph Storage Dashboard to edge locations, the deployment is exposed on the storage network. The dashboard is installed on each of the three DistributedComputeHCI nodes with distinct IP addresses without a load balancing solution. You can create an additional network to host a virtual IP where the Ceph dashboard can be exposed. You must not reuse network resources across multiple stacks. For more information on reusing network resources, see Reusing network resources in multiple stacks . To create this additional network resource, use the provided network_data_dashboard.yaml heat template. The name of the created network is StorageDashboard . Procedure Log in to Red Hat OpenStack Platform Director as stack . Generate the DistributedComputeHCIDashboard role and any other roles appropriate for your environment: Include the roles.yaml and the network_data_dashboard.yaml in the overcloud deploy command: Note The deployment provides the three IP addresses where the dashboard is enabled on the storage network. Verification To confirm the dashboard is operational at the central location and that the data it displays from the Ceph cluster is correct, see Accessing Ceph Dashboard . You can confirm that the dashboard is operating at an edge location through similar steps; however, there are exceptions as there is no load balancer at edge locations. Retrieve dashboard admin login credentials specific to the selected stack from /var/lib/mistral/<stackname>/ceph-ansible/group_vars/all.yml . Within the inventory specific to the selected stack, /var/lib/mistral/<stackname>/ceph-ansible/inventory.yml , locate the DistributedComputeHCI role hosts list and save all three of the storage_ip values. In the example below, the first two dashboard IPs are 172.16.11.84 and 172.16.11.87: You can check that the Ceph Dashboard is active at one of these IP addresses if they are accessible to you. These IP addresses are on the storage network and are not routed. If these IP addresses are not available, you must configure a load balancer for the three IP addresses that you get from the inventory to obtain a virtual IP address for verification. | [
"openstack overcloud export --config-download-dir /var/lib/mistral/central/ --stack central --output-file ~/dcn-common/central-export.yaml",
"sudo -E openstack overcloud export ceph --stack central --config-download-dir /var/lib/mistral --output-file ~/dcn-common/central_ceph_external.yaml",
"parameter_defaults: CephExternalMultiConfig: - cluster: \"central\" fsid: \"3161a3b4-e5ff-42a0-9f53-860403b29a33\" external_cluster_mon_ips: \"172.16.11.84, 172.16.11.87, 172.16.11.92\" keys: - name: \"client.openstack\" caps: mgr: \"allow *\" mon: \"profile rbd\" osd: \"profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images\" key: \"AQD29WteAAAAABAAphgOjFD7nyjdYe8Lz0mQ5Q==\" mode: \"0600\" dashboard_enabled: false ceph_conf_overrides: client: keyring: /etc/ceph/central.client.openstack.keyring",
"[global] fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19",
"ceph auth list [client.openstack] key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw== caps mgr = \"allow *\" caps mon = \"profile rbd\" caps osd = \"profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics\"",
"parameter_defaults: GlanceEnabledImportMethods: web-download,copy-image GlanceBackend: rbd GlanceStoreDescription: 'dcn0 rbd glance store' GlanceBackendID: dcn0 GlanceMultistoreConfig: central: GlanceBackend: rbd GlanceStoreDescription: 'central rbd glance store' CephClusterName: central GlanceRbdPoolName: images CephClientUserName: openstack",
"cat > /home/stack/dcn0/ceph.yaml << EOF parameter_defaults: CephClusterName: dcn0 CephAnsibleDisksConfig: osd_scenario: lvm osd_objectstore: bluestore devices: - /dev/sda - /dev/sdb CephPoolDefaultSize: 3 CephPoolDefaultPgNum: 128 EOF",
"cat > /home/stack/dcn0/tuning.yaml << EOF parameter_defaults: CephAnsibleExtraConfig: is_hci: true CephConfigOverrides: osd_recovery_op_priority: 3 osd_recovery_max_active: 3 osd_max_backfills: 1 ## Set relative to your hardware: # DistributedComputeHCIParameters: # NovaReservedHostMemory: 181000 # DistributedComputeHCIExtraConfig: # nova::cpu_allocation_ratio: 8.2 EOF",
"cat > /home/stack/central/site-name.yaml << EOF parameter_defaults: NovaComputeAvailabilityZone: dcn0 NovaCrossAZAttach: false CinderStorageAvailabilityZone: dcn0 CinderVolumeCluster: dcn0",
"openstack overcloud roles generate DistributedComputeHCI DistributedComputeHCIScaleOut -o ~/dcn0/roles_data.yaml",
"parameter_defaults: ControllerCount: 0 ComputeCount: 0 DistributedComputeHCICount: 3 DistributedComputeHCIScaleOutCount: 1 # Optional DistributedComputeScaleOutCount: 1 # Optional",
"sudo openstack tripleo container image prepare --environment-directory dcn0 -r ~/dcn0/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /home/stack/dcn-common/central-export.yaml -e /home/stack/containers-prepare-parameter.yaml --output-env-file ~/dcn0/dcn0-images-env.yaml",
"openstack overcloud deploy --stack dcn0 --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/dcn0/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/dnc0/dcn0-images-env.yaml . -e ~/dcn-common/central-export.yaml -e ~/dcn-common/central_ceph_external.yaml -e ~/dcn0/dcn_ceph_keys.yaml -e ~/dcn0/role-counts.yaml -e ~/dcn0/ceph.yaml -e ~/dcn0/site-name.yaml -e ~/dcn0/tuning.yaml -e ~/dcn0/glance.yaml",
"TRIPLEO_PLAN_NAME=central ansible -i /usr/bin/tripleo-ansible-inventory nova_api[0] -b -a \"{{ container_cli }} exec -it nova_api nova-manage cell_v2 discover_hosts --by-service --verbose\"",
"openstack overcloud export --config-download-dir /var/lib/mistral/central/ --stack central --output-file ~/dcn-common/central-export.yaml",
"sudo -E openstack overcloud export ceph --stack central --config-download-dir /var/lib/mistral --output-file ~/dcn-common/central_ceph_external.yaml",
"parameter_defaults: GlanceEnabledImportMethods: web-download,copy-image GlanceBackend: rbd GlanceStoreDescription: 'dcn0 rbd glance store' GlanceBackendID: dcn0 GlanceMultistoreConfig: central: GlanceBackend: rbd GlanceStoreDescription: 'central rbd glance store' CephClientUserName: 'openstack' CephClusterName: central",
"cat > /home/stack/dcn0/ceph.yaml << EOF parameter_defaults: CephClusterName: dcn0 CephAnsibleDisksConfig: osd_scenario: lvm osd_objectstore: bluestore devices: - /dev/sda - /dev/sdb CephPoolDefaultSize: 3 CephPoolDefaultPgNum: 128 EOF",
"cat > /home/stack/dcn0/site-name.yaml << EOF parameter_defaults: NovaComputeAvailabilityZone: dcn0 NovaCrossAZAttach: false CinderStorageAvailabilityZone: dcn0 CinderVolumeCluster: dcn0",
"openstack overcloud roles generate DistributedCompute DistributedComputeScaleOut CephAll-o ~/dcn0/roles_data.yaml",
"parameter_defaults: ControllerCount: 0 ComputeCount: 0 DistributedComputeCount: 3 CephAll: 3 DistributedComputeScaleOutCount: 1 # Optional",
"sudo openstack tripleo container image prepare --environment-directory dcn0 -r ~/dcn0/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /home/stack/dcn-common/central-export.yaml -e /home/stack/containers-prepare-parameter.yaml --output-env-file ~/dcn0/dcn0-images-env.yaml",
"openstack overcloud deploy --stack dcn0 --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/dcn0/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/dnc0/dcn0-images-env.yaml . -e ~/dcn-common/central-export.yaml -e ~/dcn-common/central_ceph_external.yaml -e ~/dcn0/dcn_ceph_keys.yaml -e ~/dcn0/role-counts.yaml -e ~/dcn0/ceph.yaml -e ~/dcn0/site-name.yaml -e ~/dcn0/tuning.yaml -e ~/dcn0/glance.yaml",
"TRIPLEO_PLAN_NAME=central ansible -i /usr/bin/tripleo-ansible-inventory nova_api[0] -b -a \"{{ container_cli }} exec -it nova_api nova-manage cell_v2 discover_hosts --by-service --verbose\"",
"ceph osd pool create volumes <_PGnum_> ceph osd pool create images <_PGnum_> ceph osd pool create vms <_PGnum_> ceph osd pool create backups <_PGnum_> ceph osd pool create metrics <_PGnum_>",
"ceph auth add client.openstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'",
"[global] fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19",
"parameter_defaults: # The cluster FSID CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19' # The CephX user auth key CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==' # The list of IPs or hostnames of the Ceph monitors CephExternalMonHost: '172.16.1.7, 172.16.1.8, 172.16.1.9' # The desired name of the generated key and conf files CephClusterName: dcn1",
"sudo -E openstack overcloud export ceph --stack central --config-download-dir /var/lib/mistral --output-file ~/dcn-common/central_ceph_external.yaml",
"parameter_defaults: CephExternalMultiConfig: - cluster: \"central\" fsid: \"3161a3b4-e5ff-42a0-9f53-860403b29a33\" external_cluster_mon_ips: \"172.16.11.84, 172.16.11.87, 172.16.11.92\" keys: - name: \"client.openstack\" caps: mgr: \"allow *\" mon: \"profile rbd\" osd: \"profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images\" key: \"AQD29WteAAAAABAAphgOjFD7nyjdYe8Lz0mQ5Q==\" mode: \"0600\" dashboard_enabled: false ceph_conf_overrides: client: keyring: /etc/ceph/central.client.openstack.keyring",
"parameter_defaults: CephExternalMultiConfig: cluster: dcn1 ... cluster: dcn2 ...",
"openstack overcloud deploy --stack dcn1 --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/dcn1/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-hci.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/dnc1/ceph-external-dcn1.yaml . -e ~/dcn-common/central-export.yaml -e ~/dcn-common/central_ceph_external.yaml -e ~/dcn1/dcn_ceph_keys.yaml -e ~/dcn1/role-counts.yaml -e ~/dcn1/ceph.yaml -e ~/dcn1/site-name.yaml -e ~/dcn1/tuning.yaml -e ~/dcn1/glance.yaml",
"cd ~ mkdir dcn9",
"cp dcn0/ceph.yaml dcn9/ceph.yaml sed s/dcn0/dcn9/g -i dcn9/ceph.yaml cp dcn0/overrides.yaml dcn9/overrides.yaml sed s/dcn0/dcn9/g -i dcn9/overrides.yaml sed s/\"0-ceph-%index%\"/\"9-ceph-%index%\"/g -i dcn9/overrides.yaml cp dcn0/deploy.sh dcn9/deploy.sh sed s/dcn0/dcn9/g -i dcn9/deploy.sh",
"[leaf0] cidr = 192.168.10.0/24 dhcp_start = 192.168.10.10 dhcp_end = 192.168.10.90 inspection_iprange = 192.168.10.100,192.168.10.190 gateway = 192.168.10.1 masquerade = False ... [leaf9] cidr = 192.168.19.0/24 dhcp_start = 192.168.19.10 dhcp_end = 192.168.19.90 inspection_iprange = 192.168.19.100,192.168.19.190 gateway = 192.168.10.1 masquerade = False",
"cat > /home/stack/central/network-environment.yaml << EOF parameter_defaults: NetworkDeploymentActions: [\"CREATE\", \"UPDATE\"] EOF",
"openstack overcloud deploy --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/central/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-hci.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/central/dcn9-images-env.yaml . -e ~/dcn-common/central-export.yaml -e ~/dcn-common/central_ceph_external.yaml -e ~/central/dcn_ceph_keys.yaml -e ~/central/role-counts.yaml -e ~/central/ceph.yaml -e ~/central/site-name.yaml -e ~/central/tuning.yaml -e ~/central/glance.yaml",
"openstack baremetal node list",
"openstack overcloud deploy --stack dcn9 --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/dcn9/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-hci.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/dnc9/dcn9-images-env.yaml . -e ~/dcn-common/central-export.yaml -e ~/dcn-common/central_ceph_external.yaml -e ~/dcn9/dcn_ceph_keys.yaml -e ~/dcn9/role-counts.yaml -e ~/dcn9/ceph.yaml -e ~/dcn9/site-name.yaml -e ~/dcn9/tuning.yaml -e ~/dcn9/glance.yaml",
"parameter_defaults: GlanceEnabledImportMethods: web-download,copy-image GlanceBackend: rbd GlanceStoreDescription: 'central rbd glance store' CephClusterName: central GlanceBackendID: central GlanceMultistoreConfig: dcn0: GlanceBackend: rbd GlanceStoreDescription: 'dcn0 rbd glance store' CephClientUserName: 'openstack' CephClusterName: dcn0 GlanceBackendID: dcn0 dcn1: GlanceBackend: rbd GlanceStoreDescription: 'dcn1 rbd glance store' CephClientUserName: 'openstack' CephClusterName: dcn1 GlanceBackendID: dcn1",
"sudo -E openstack overcloud export ceph --stack dcn0,dcn1 --config-download-dir /var/lib/mistral --output-file ~/central/dcn_ceph.yaml",
"openstack overcloud deploy --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/central/central_roles.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/central/central-images-env.yaml -e ~/central/role-counts.yaml -e ~/central/site-name.yaml -e ~/central/ceph.yaml -e ~/central/ceph_keys.yaml -e ~/central/glance.yaml -e ~/central/dcn_ceph_external.yaml",
"ssh heat-admin@controller-0 sudo pcs resource restart openstack-cinder-volume ssh heat-admin@controller-0 sudo pcs resource restart openstack-cinder-backup",
"glance image-delete <image_id>",
"parameter_defaults: ManageNetworks: false",
"openstack overcloud roles generate DistributedComputeHCIDashboard -o ~/dnc0/roles.yaml",
"openstack overcloud deploy --templates -r ~/<dcn>/<dcn_site_roles>.yaml -n /usr/share/openstack-tripleo-heat-templates/network_data_dashboard.yaml -e <overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-dashboard.yaml",
"DistributedComputeHCI: hosts: dcn1-distributed-compute-hci-0: ansible_host: 192.168.24.16 storage_hostname: dcn1-distributed-compute-hci-0.storage.localdomain storage_ip: 172.16.11.84 dcn1-distributed-compute-hci-1: ansible_host: 192.168.24.22 storage_hostname: dcn1-distributed-compute-hci-1.storage.localdomain storage_ip: 172.16.11.87"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/distributed_compute_node_and_storage_deployment/assembly_deploying-storage-at-the-edge |
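As a follow-up to the edge deployments and the central location update described above, a lightweight way to confirm that services registered in the expected availability zones is to query the service lists. This is a sketch; the zone names are the ones used in this chapter:

openstack volume service list
openstack compute service list

Each edge site should report its cinder-volume and nova-compute services in its own availability zone (dcn0, dcn1, and so on), with the central services in the central zone.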
Chapter 18. Node [config.openshift.io/v1] | Chapter 18. Node [config.openshift.io/v1] Description Node holds cluster-wide information about node-specific features. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values. 18.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description cgroupMode string CgroupMode determines the cgroups version on the node workerLatencyProfile string WorkerLatencyProfile determines how fast the kubelet updates its status, and the corresponding reaction of the cluster 18.1.2. .status Description status holds observed values. Type object 18.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/nodes DELETE : delete collection of Node GET : list objects of kind Node POST : create a Node /apis/config.openshift.io/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /apis/config.openshift.io/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 18.2.1. /apis/config.openshift.io/v1/nodes HTTP method DELETE Description delete collection of Node Table 18.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Node Table 18.2. HTTP responses HTTP code Response body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 18.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.4. Body parameters Parameter Type Description body Node schema Table 18.5. HTTP responses HTTP code Response body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 18.2.2. /apis/config.openshift.io/v1/nodes/{name} Table 18.6. Global path parameters Parameter Type Description name string name of the Node HTTP method DELETE Description delete a Node Table 18.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 18.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 18.9. HTTP responses HTTP code Response body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 18.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.11. HTTP responses HTTP code Response body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 18.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.13. Body parameters Parameter Type Description body Node schema Table 18.14. HTTP responses HTTP code Response body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 18.2.3. /apis/config.openshift.io/v1/nodes/{name}/status Table 18.15. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description read status of the specified Node Table 18.16. HTTP responses HTTP code Response body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 18.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.18. HTTP responses HTTP code Response body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 18.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.20. Body parameters Parameter Type Description body Node schema Table 18.21. HTTP responses HTTP code Response body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/node-config-openshift-io-v1 |
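Because the spec of this resource carries only the two fields described above, a complete manifest is short. The following is a sketch; the cluster-scoped instance is conventionally named cluster, and the field values shown are illustrative examples of a cgroups version and a latency profile:

apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v2"
  workerLatencyProfile: MediumUpdateAverageReaction

Creating, patching, or replacing this object goes through the endpoints listed in section 18.2.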
12.2. LVM Partition Management | 12.2. LVM Partition Management The following commands can be found by issuing lvm help at a command prompt. Table 12.2. LVM commands Command Description dumpconfig Dump the active configuration formats List the available metadata formats help Display the help commands lvchange Change the attributes of logical volume(s) lvcreate Create a logical volume lvdisplay Display information about a logical volume lvextend Add space to a logical volume lvmchange Due to use of the device mapper, this command has been deprecated lvmdiskscan List devices that may be used as physical volumes lvmsadc Collect activity data lvmsar Create activity report lvreduce Reduce the size of a logical volume lvremove Remove logical volume(s) from the system lvrename Rename a logical volume lvresize Resize a logical volume lvs Display information about logical volumes lvscan List all logical volumes in all volume groups pvchange Change attributes of physical volume(s) pvcreate Initialize physical volume(s) for use by LVM pvdata Display the on-disk metadata for physical volume(s) pvdisplay Display various attributes of physical volume(s) pvmove Move extents from one physical volume to another pvremove Remove LVM label(s) from physical volume(s) pvresize Resize a physical volume in use by a volume group pvs Display information about physical volumes pvscan List all physical volumes segtypes List available segment types vgcfgbackup Backup volume group configuration vgcfgrestore Restore volume group configuration vgchange Change volume group attributes vgck Check the consistency of a volume group vgconvert Change volume group metadata format vgcreate Create a volume group vgdisplay Display volume group information vgexport Unregister a volume group from the system vgextend Add physical volumes to a volume group vgimport Register exported volume group with system vgmerge Merge volume groups vgmknodes Create the special files for volume group devices in /dev/ vgreduce Remove a physical volume from a volume group vgremove Remove a volume group vgrename Rename a volume group vgs Display information about volume groups vgscan Search for all volume groups vgsplit Move physical volumes into a new volume group version Display software and driver version information | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/managing_disk_storage-lvm_partition_management |
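Taken together, the commands in the table map onto a simple bottom-up workflow: initialize disks as physical volumes, group them into a volume group, and then carve logical volumes out of that group. A sketch, using a hypothetical device name:

pvcreate /dev/sdb1
vgcreate myvg /dev/sdb1
lvcreate -L 10G -n mylv myvg
lvs

The reverse direction uses lvremove, vgreduce or vgremove, and pvremove, as described in the same table.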
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/migrating_applications_to_red_hat_build_of_quarkus_3.2/making-open-source-more-inclusive |
Chapter 2. Setting up Maven locally | Chapter 2. Setting up Maven locally Maven is the typical choice for Red Hat build of Apache Camel application development and project management. 2.1. Preparing to set up Maven Maven is a free, open source, build tool from Apache. Procedure Download Maven 3.8.6 or later from the Maven download page . Tip To verify that you have the correct Maven and JDK version installed, open a command terminal and enter the following command: Check the output to verify that Maven is version 3.8.6 or newer, and is using OpenJDK 17. Ensure that your system is connected to the Internet. While building a project, the default behavior is that Maven searches external repositories and downloads the required artifacts. Maven looks for repositories that are accessible over the Internet. You can change this behavior so that Maven searches only repositories that are on a local network. That is, Maven can run in an offline mode. In offline mode, Maven looks for artifacts in its local repository. See Section 2.4, "Using local Maven repositories" . 2.2. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is not a user specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: Note If you are using the camel-jira component, also add the atlassian repository. Note If you want to use technology preview builds, also add the earlyaccess repository. <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>atlassian</id> <url>https://packages.atlassian.com/artifactory/maven-public/</url> <name>atlassian external repo</name> <snapshots> <enabled>false</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings> 2.3. Building an offline Maven repository Red Hat build of Apache Camel for Spring Boot users can build their own offline Maven repository which is used in a restricted environment. 
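Before building an offline repository for a restricted environment, you can confirm that the Red Hat repositories configured in the previous section are active in your Maven settings. The following commands are one way to check this; the maven-help-plugin goal shown is a standard Maven plugin goal, not something specific to Red Hat build of Apache Camel.
mvn --version
mvn help:effective-settings
The effective settings output should list the redhat-ga-repository and redhat-ea-repository entries under the active extra-repos profile.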
For each release of Red Hat build of Apache Camel for Spring Boot, users can download the zip file from the Red Hat Customer Portal. Procedure Download the offline Maven repository builder from the customer portal. For example, for Red Hat build of Camel Spring Boot version 4.4, use the Offline Maven builder . The downloaded file is a zip file that contains everything to build an offline Maven repository for this specific release. Unzip the downloaded zip file. The directory structure of the archive is as follows: This zip contains the following files: build-offline-repo.sh - A wrapper script around the Offliner tool. offliner-2.0.jar - Downloads the artifacts in the manifest. redhat-camel-4.4.0-offline-manifest.txt Lists the required artifacts that need to be downloaded. redhat-camel-spring-boot-4.4.0-offline-manifest.txt Lists the required artifacts that need to be downloaded. README - Explains the steps and commands required for building the offline Maven repository. To build an offline repository, run the build-offline-repo.sh script as per the instructions given in the README file. Optionally, you can specify a directory where the artifacts should be downloaded to. If not specified, a directory called 'repository' is created in the current working directory. If needed, you can configure the tool to use additional Maven repositories by adding them to the file maven-repositories.txt . This is generally not necessary as the tool is pre-configured with the right set of Maven repositories. If HTTP calls need to go through an HTTP proxy, you may need to change the script. Add the arguments --proxy <proxy-host> --proxy-user <proxy-user> --proxy-pass <proxy-pass> in the line that invokes the JVM in the script. You can use the option -v to print the version number of the script. This version is the version number of the script and is not related to the Red Hat build of Apache Camel product version. Troubleshooting You can configure the logging via the provided logback.xml file. When the shell script is executed, any download activity is written to the log file offliner.log and any download failures are listed in errors.log . At the end of the execution the offliner tool displays a summary of the downloaded and failed artifacts, but we also recommend scanning through errors.log for any download failures. If any artifacts fail to download, re-run the tool against the same target folder. The tool avoids downloading artifacts that it already downloaded and only attempts those that it failed on previously. 2.4. Using local Maven repositories If you are running a container without an Internet connection, and you need to deploy an application that has dependencies that are not available offline, you can use the Maven dependency plug-in to download the application's dependencies into a Maven offline repository. You can then distribute this customized Maven offline repository to machines that do not have an Internet connection. Procedure In the project directory that contains the pom.xml file, download a repository for a Maven project by running a command such as the following: In this example, Maven dependencies and plug-ins that are required to build the project are downloaded to the /tmp/my-project directory. Distribute this customized Maven offline repository internally to any machines that do not have an Internet connection. 2.5.
Setting Maven mirror using environmental variables or system properties When running the applications you need access to the artifacts that are in the Red Hat Maven repositories. These repositories are added to Maven's settings.xml file. Maven checks the following locations for settings.xml file: looks for the specified url if not found looks for USD{user.home}/.m2/settings.xml if not found looks for USD{maven.home}/conf/settings.xml if not found looks for USD{M2_HOME}/conf/settings.xml if no location is found, empty org.apache.maven.settings.Settings instance is created. 2.5.1. About Maven mirror Maven uses a set of remote repositories to access the artifacts, which are currently not available in local repository. The list of repositories almost always contains Maven Central repository, but for Red Hat Fuse, it also contains Maven Red Hat repositories. In some cases where it is not possible or allowed to access different remote repositories, you can use a mechanism of Maven mirrors. A mirror replaces a particular repository URL with a different one, so all HTTP traffic when remote artifacts are being searched for can be directed to a single URL. 2.5.2. Adding Maven mirror to settings.xml To set the Maven mirror, add the following section to Maven's settings.xml : No mirror is used if the above section is not found in the settings.xml file. To specify a global mirror without providing the XML configuration, you can use either system property or environmental variables. 2.5.3. Setting Maven mirror using environmental variable or system property To set the Maven mirror using either environmental variable or system property, you can add: Environmental variable called MAVEN_MIRROR_URL to bin/setenv file System property called mavenMirrorUrl to etc/system.properties file 2.5.4. Using Maven options to specify Maven mirror url To use an alternate Maven mirror url, other than the one specified by environmental variables or system property, use the following maven options when running the application: -DmavenMirrorUrl=mirrorId::mirrorUrl for example, -DmavenMirrorUrl=my-mirror::http://mirror.net/repository -DmavenMirrorUrl=mirrorUrl for example, -DmavenMirrorUrl=http://mirror.net/repository . In this example, the <id> of the <mirror> is just a mirror. 2.6. About Maven artifacts and coordinates In the Maven build system, the basic building block is an artifact . After a build, the output of an artifact is typically an archive, such as a JAR or WAR file. A key aspect of Maven is the ability to locate artifacts and manage the dependencies between them. A Maven coordinate is a set of values that identifies the location of a particular artifact. A basic coordinate has three values in the following form: groupId:artifactId:version Sometimes Maven augments a basic coordinate with a packaging value or with both a packaging value and a classifier value. A Maven coordinate can have any one of the following forms: Here are descriptions of the values: groupdId Defines a scope for the name of the artifact. You would typically use all or part of a package name as a group ID. For example, org.fusesource.example . artifactId Defines the artifact name relative to the group ID. version Specifies the artifact's version. A version number can have up to four parts: n.n.n.n , where the last part of the version number can contain non-numeric characters. For example, the last part of 1.0-SNAPSHOT is the alphanumeric substring, 0-SNAPSHOT . packaging Defines the packaged entity that is produced when you build the project. 
For OSGi projects, the packaging is bundle . The default value is jar . classifier Enables you to distinguish between artifacts that were built from the same POM, but have different content. Elements in an artifact's POM file define the artifact's group ID, artifact ID, packaging, and version, as shown here: <project ... > ... <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> ... </project> To define a dependency on the preceding artifact, you would add the following dependency element to a POM file: <project ... > ... <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> ... </project> Note It is not necessary to specify the bundle package type in the preceding dependency, because a bundle is just a particular kind of JAR file and jar is the default Maven package type. If you do need to specify the packaging type explicitly in a dependency, however, you can use the type element. | [
"mvn --version",
"<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>atlassian</id> <url>https://packages.atlassian.com/artifactory/maven-public/</url> <name>atlassian external repo</name> <snapshots> <enabled>false</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>",
"├── README ├── build-offline-repo.sh ├── errors.log ├── logback.xml ├── maven-repositories.txt ├── offliner-2.0-sources.jar ├── offliner-2.0-sources.jar.md5 ├── offliner-2.0.jar ├── offliner-2.0.jar.md5 ├── offliner.log ├── rhaf-camel-offliner-4.4.0.txt └── rhaf-camel-spring-boot-offliner-4.4.0.txt",
"mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project",
"<mirror> <id>all</id> <mirrorOf>*</mirrorOf> <url>http://host:port/path</url> </mirror>",
"groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version",
"<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>",
"<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/set-up-maven-locally |
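As a rough illustration of how the pieces above fit together, you can resolve an artifact by its Maven coordinates into the local repository created in Section 2.4 and then build against that repository in offline mode. The coordinate placeholders and the repository path are illustrative only; replace them with your own values.
mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:get -Dartifact=<groupId>:<artifactId>:<version> -Dmaven.repo.local=/tmp/my-project
mvn --offline clean package -Dmaven.repo.local=/tmp/my-project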
1.2. Release overview | 1.2. Release overview 1.2.1. New features in Red Hat Enterprise Linux 6 See the Release Notes for the latest minor version of Red Hat Enterprise Linux 6 to learn about the newest features. To learn about features introduced in earlier releases, see the Release Notes for respective minor versions of Red Hat Enterprise Linux 6. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/ch01s02 |
Chapter 6. About Logging | Chapter 6. About Logging As a cluster administrator, you can deploy logging on an OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. You can forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. You can also visualize your log data in the OpenShift Container Platform web console, or the Kibana web console, depending on your deployed log storage solution. Note The Kibana web console is now deprecated and is planned to be removed in a future logging release. OpenShift Container Platform cluster administrators can deploy logging by using Operators. For information, see Installing logging . The Operators are responsible for deploying, upgrading, and maintaining logging. After the Operators are installed, you can create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support logging. You can also create a ClusterLogForwarder CR to specify which logs are collected, how they are transformed, and where they are forwarded to. Note Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in Forward audit logs to the log store . 6.1. Logging architecture The major components of the logging are: Collector The collector is a daemonset that deploys pods to each OpenShift Container Platform node. It collects log data from each node, transforms the data, and forwards it to configured outputs. You can use the Vector collector or the legacy Fluentd collector. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Log store The log store stores log data for analysis and is the default output for the log forwarder. You can use the default LokiStack log store, the legacy Elasticsearch log store, or forward logs to additional external log stores. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Visualization You can use a UI component to view a visual representation of your log data. The UI provides a graphical interface to search, query, and view stored logs. The OpenShift Container Platform web console UI is provided by enabling the OpenShift Container Platform console plugin. Note The Kibana web console is now deprecated and is planned to be removed in a future logging release. Logging collects container logs and node logs. These are categorized into types: Application logs Container logs generated by user applications running in the cluster, except infrastructure container applications.
Infrastructure logs Container logs generated by infrastructure namespaces: openshift* , kube* , or default , as well as journald messages from nodes. Audit logs Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and logs from the auditd , kube-apiserver , openshift-apiserver services, as well as the ovn project if enabled. Additional resources Log visualization with the web console 6.2. About deploying logging Administrators can deploy the logging by using the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to install the logging Operators. The Operators are responsible for deploying, upgrading, and maintaining the logging. Administrators and application developers can view the logs of the projects for which they have view access. 6.2.1. Logging custom resources You can configure your logging deployment with custom resource (CR) YAML files implemented by each Operator. Red Hat OpenShift Logging Operator : ClusterLogging (CL) - After the Operators are installed, you create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support the logging. The ClusterLogging CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The Red Hat OpenShift Logging Operator watches the ClusterLogging CR and adjusts the logging deployment accordingly. ClusterLogForwarder (CLF) - Generates collector configuration to forward logs per user configuration. Loki Operator : LokiStack - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy. OpenShift Elasticsearch Operator : Note These CRs are generated and managed by the OpenShift Elasticsearch Operator. Manual changes cannot be made without being overwritten by the Operator. ElasticSearch - Configure and deploy an Elasticsearch instance as the default log store. Kibana - Configure and deploy Kibana instance to search, query and view logs. 6.2.2. About JSON OpenShift Container Platform Logging You can use JSON logging to configure the Log Forwarding API to parse JSON strings into a structured object. You can perform the following tasks: Parse JSON logs Configure JSON log data for Elasticsearch Forward JSON logs to the Elasticsearch log store 6.2.3. About collecting and storing Kubernetes events The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Container Platform Logging. You must manually deploy the Event Router. For information, see About collecting and storing Kubernetes events . 6.2.4. About troubleshooting OpenShift Container Platform Logging You can troubleshoot the logging issues by performing the following tasks: Viewing logging status Viewing the status of the log store Understanding logging alerts Collecting logging data for Red Hat Support Troubleshooting for critical alerts 6.2.5. About exporting fields The logging system exports fields. Exported fields are present in the log records and are available for searching from Elasticsearch and Kibana. For information, see About exporting fields . 6.2.6. About event routing The Event Router is a pod that watches OpenShift Container Platform events so they can be collected by logging. The Event Router collects events from all projects and writes them to STDOUT . Fluentd collects those events and forwards them into the OpenShift Container Platform Elasticsearch instance. 
Elasticsearch indexes the events to the infra index. You must manually deploy the Event Router. For information, see Collecting and storing Kubernetes events . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/cluster-logging |
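The following is a minimal sketch of a ClusterLogForwarder CR that forwards application and infrastructure logs to the default Red Hat managed log store. The exact field names and supported options depend on the logging version you have installed, so treat this as an outline rather than a definitive configuration.
oc apply -f - <<'EOF'
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: all-to-default
      inputRefs:
        - application
        - infrastructure
      outputRefs:
        - default
EOF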
19.8. Administering User Tasks From the Command Line | 19.8. Administering User Tasks From the Command Line You can use the ovirt-aaa-jdbc-tool tool to manage user accounts on the internal domain. Changes made using the tool take effect immediately and do not require you to restart the ovirt-engine service. For a full list of user options, run ovirt-aaa-jdbc-tool user --help . Common examples are provided in this section. Important You must be logged into the Manager machine. 19.8.1. Creating a New User You can create a new user account. The optional --attribute command specifies account details. For a full list of options, run ovirt-aaa-jdbc-tool user add --help . You can add the newly created user in the Administration Portal and assign the user appropriate roles and permissions. See Section 19.7.1, "Adding Users and Assigning VM Portal Permissions" for more information. 19.8.2. Setting a User Password You can create a password. You must set a value for --password-valid-to , otherwise the password expiry time defaults to the current time. The date format is yyyy-MM-dd HH:mm:ssX . In this example, -0800 stands for GMT minus 8 hours. For more options, run ovirt-aaa-jdbc-tool user password-reset --help . Note By default, the password policy for user accounts on the internal domain has the following restrictions: A minimum of 6 characters. The three previously used passwords cannot be set again during the password change. For more information on the password policy and other default settings, run ovirt-aaa-jdbc-tool settings show . 19.8.3. Setting User Timeout You can set the user timeout period: 19.8.4. Pre-encrypting a User Password You can create a pre-encrypted user password using the ovirt-engine-crypto-tool script. This option is useful if you are adding users and passwords to the database with a script. Note Passwords are stored in the Manager database in encrypted form. The ovirt-engine-crypto-tool script is used because all passwords must be encrypted with the same algorithm. If the password is pre-encrypted, password validity tests cannot be performed. The password will be accepted even if it does not comply with the password validation policy. Run the following command: The script will prompt you to enter the password. Alternatively, you can use the --password=file: file option to encrypt a single password that appears as the first line of a file. This option is useful for automation. In the following example, file is a text file containing a single password for encryption: Set the new password with the ovirt-aaa-jdbc-tool script, using the --encrypted option: Enter and confirm the encrypted password: 19.8.5. Viewing User Information You can view detailed user account information: This command displays more information than in the Administration Portal's Administration Users screen. 19.8.6. Editing User Information You can update user information, such as the email address: 19.8.7. Removing a User You can remove a user account: Remove the user from the Administration Portal. See Section 19.7.4, "Removing Users" for more information. 19.8.8. Disabling the Internal Administrative User You can disable users on the local domains including the admin@internal user created during engine-setup . Make sure you have at least one user in the environment with full administrative permissions before disabling the default admin user. Disabling the Internal Administrative User Log in to the machine on which the Red Hat Virtualization Manager is installed.
Make sure another user with the SuperUser role has been added to the environment. See Section 19.7.1, "Adding Users and Assigning VM Portal Permissions" for more information. Disable the default admin user: Note To enable a disabled user, run ovirt-aaa-jdbc-tool user edit username --flag=-disabled 19.8.9. Managing Groups You can use the ovirt-aaa-jdbc-tool tool to manage group accounts on your internal domain. Managing group accounts is similar to managing user accounts. For a full list of group options, run ovirt-aaa-jdbc-tool group --help . Common examples are provided in this section. Creating a Group This procedure shows you how to create a group account, add users to the group, and view the details of the group. Log in to the machine on which the Red Hat Virtualization Manager is installed. Create a new group: Add users to the group. The users must be created already. Note For a full list of the group-manage options, run ovirt-aaa-jdbc-tool group-manage --help . View group account details: Add the newly created group in the Administration Portal and assign the group appropriate roles and permissions. The users in the group inherit the roles and permissions of the group. See Section 19.7.1, "Adding Users and Assigning VM Portal Permissions" for more information. Creating Nested Groups This procedure shows you how to create groups within groups. Log in to the machine on which the Red Hat Virtualization Manager is installed. Create the first group: Create the second group: Add the second group to the first group: Add the first group in the Administration Portal and assign the group appropriate roles and permissions. See Section 19.7.1, "Adding Users and Assigning VM Portal Permissions" for more information. 19.8.10. Querying Users and Groups The query module allows you to query user and group information. For a full list of options, run ovirt-aaa-jdbc-tool query --help . Listing All User or Group Account Details This procedure shows you how to list all account information. Log in to the machine on which the Red Hat Virtualization Manager is installed. List the account details. All user account details: All group account details: Listing Filtered Account Details This procedure shows you how to apply filters when listing account information. Log in to the machine on which the Red Hat Virtualization Manager is installed. Filter account details using the --pattern parameter. List user account details with names that start with the character j . List groups that have the department attribute set to marketing : 19.8.11. Managing Account Settings To change the default account settings, use the ovirt-aaa-jdbc-tool settings module. Updating Account Settings This procedure shows you how to update the default account settings. Log in to the machine on which the Red Hat Virtualization Manager is installed. Run the following command to show all the settings available: Change the desired settings: This example updates the default log in session time to 60 minutes for all user accounts. The default value is 10080 minutes. This example updates the number of failed login attempts a user can perform before the user account is locked. The default value is 5. Note To unlock a locked user account, run ovirt-aaa-jdbc-tool user unlock test1 . | [
"ovirt-aaa-jdbc-tool user add test1 --attribute=firstName= John --attribute=lastName= Doe adding user test1 user added successfully",
"ovirt-aaa-jdbc-tool user password-reset test1 --password-valid-to= \"2025-08-01 12:00:00-0800\" Password: updating user test1 user updated successfully",
"engine-config --set UserSessionTimeOutInterval= integer",
"/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh pbe-encode",
"/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh pbe-encode --password=file: file",
"ovirt-aaa-jdbc-tool user password-reset test1 --password-valid-to=\"2025-08-01 12:00:00-0800\" --encrypted",
"Password: Reenter password: updating user test1 user updated successfully",
"ovirt-aaa-jdbc-tool user show test1",
"ovirt-aaa-jdbc-tool user edit test1 [email protected]",
"ovirt-aaa-jdbc-tool user delete test1",
"ovirt-aaa-jdbc-tool user edit admin --flag=+disabled",
"ovirt-aaa-jdbc-tool group add group1",
"ovirt-aaa-jdbc-tool group-manage useradd group1 --user= test1",
"ovirt-aaa-jdbc-tool group show group1",
"ovirt-aaa-jdbc-tool group add group1",
"ovirt-aaa-jdbc-tool group add group1-1",
"ovirt-aaa-jdbc-tool group-manage groupadd group1 --group= group1-1",
"ovirt-aaa-jdbc-tool query --what=user",
"ovirt-aaa-jdbc-tool query --what=group",
"ovirt-aaa-jdbc-tool query --what=user --pattern=\"name= j* \"",
"ovirt-aaa-jdbc-tool query --what=group --pattern=\"department= marketing \"",
"ovirt-aaa-jdbc-tool settings show",
"ovirt-aaa-jdbc-tool settings set --name=MAX_LOGIN_MINUTES --value= 60",
"ovirt-aaa-jdbc-tool settings set --name=MAX_FAILURES_SINCE_SUCCESS --value= 3"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-administering_user_tasks_from_the_commandline |
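Putting several of the commands above together, the following sequence creates a user, sets an initial password, creates a group, adds the user to the group, and then queries for the account. The user name, group name, and expiry date are examples only; substitute your own values.
ovirt-aaa-jdbc-tool user add jsmith --attribute=firstName=Jane --attribute=lastName=Smith
ovirt-aaa-jdbc-tool user password-reset jsmith --password-valid-to="2026-01-01 12:00:00Z"
ovirt-aaa-jdbc-tool group add developers
ovirt-aaa-jdbc-tool group-manage useradd developers --user=jsmith
ovirt-aaa-jdbc-tool query --what=user --pattern="name=j*"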
Chapter 6. Using the management API | Chapter 6. Using the management API AMQ Broker has an extensive management API, which you can use to modify a broker's configuration, create new resources (for example, addresses and queues), inspect these resources (for example, how many messages are currently held in a queue), and interact with them (for example, to remove messages from a queue). In addition, clients can use the management API to manage the broker and subscribe to management notifications. 6.1. Methods for managing AMQ Broker using the management API There are two ways to use the management API to manage the broker: Using JMX - JMX is the standard way to manage Java applications Using the JMS API - management operations are sent to the broker using JMS messages and the AMQ JMS client Although there are two different ways to manage the broker, each API supports the same functionality. If it is possible to manage a resource using JMX it is also possible to achieve the same result by using JMS messages and the AMQ JMS client. This choice depends on your particular requirements, application settings, and environment. Regardless of the way you invoke management operations, the management API is the same. For each managed resource, there exists a Java interface describing what can be invoked for this type of resource. The broker exposes its managed resources in the org.apache.activemq.artemis.api.core.management package. The way to invoke management operations depends on whether JMX messages or JMS messages and the AMQ JMS client are used. Note Some management operations require a filter parameter to choose which messages are affected by the operation. Passing null or an empty string means that the management operation will be performed on all messages . 6.2. Managing AMQ Broker using JMX You can use Java Management Extensions (JMX) to manage a broker. The management API is exposed by the broker using MBeans interfaces. The broker registers its resources with the domain org.apache.activemq . For example, the ObjectName to manage a queue named exampleQueue is: org.apache.activemq.artemis:broker="__BROKER_NAME__",component=addresses,address="exampleQueue",subcomponent=queues,routingtype="anycast",queue="exampleQueue" The MBean is: org.apache.activemq.artemis.api.management.QueueControl The MBean's ObjectName is built using the helper class org.apache.activemq.artemis.api.core.management.ObjectNameBuilder . You can also use jconsole to find the ObjectName of the MBeans you want to manage. Managing the broker using JMX is identical to management of any Java applications using JMX. It can be done by reflection or by creating proxies of the MBeans. 6.2.1. Configuring JMX management By default, JMX is enabled to manage the broker. You can enable or disable JMX management by setting the jmx-management-enabled property in the broker.xml configuration file. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Set <jmx-management-enabled> . <jmx-management-enabled>true</jmx-management-enabled> If JMX is enabled, the broker can be managed locally using jconsole . Note Remote connections to JMX are not enabled by default for security reasons. If you want to manage multiple brokers from the same MBeanServer , configure the JMX domain for each of the brokers. By default, the broker uses the JMX domain org.apache.activemq.artemis . 
<jmx-domain>my.org.apache.activemq</jmx-domain> Note If you are using AMQ Broker on a Windows system, system properties must be set in artemis , or artemis.cmd . A shell script is located under <install_dir> /bin . Additional resources For more information on configuring the broker for remote management, see Oracle's Java Management Guide . 6.2.2. Configuring JMX management access By default, remote JMX access to a broker is disabled for security reasons. However, AMQ Broker has a JMX agent that allows remote access to JMX MBeans. You enable JMX access by configuring a connector element in the broker management.xml configuration file. Note While it is also possible to enable JMX access using the `com.sun.management.jmxremote ` JVM system property, that method is not supported and is not secure. Modifying that JVM system property can bypass RBAC on the broker. To minimize security risks, consider limited access to localhost. Important Exposing the JMX agent of a broker for remote management has security implications. To secure your configuration as described in this procedure: Use SSL for all connections. Explicitly define the connector host, that is, the host and port to expose the agent on. Explicitly define the port that the RMI (Remote Method Invocation) registry binds to. Prerequisites A working broker instance The Java jconsole utility Procedure Open the <broker-instance-dir> /etc/management.xml configuration file. Define a connector for the JMX agent. The connector-port setting establishes an RMI registry that clients such as jconsole query for the JMX connector server. For example, to allow remote access on port 1099: <connector connector-port="1099"/> Verify the connection to the JMX agent using jconsole : Define additional properties on the connector, as described below. connector-host The broker server host to expose the agent on. To prevent remote access, set connector-host to 127.0.0.1 (localhost). rmi-registry-port The port that the JMX RMI connector server binds to. If not set, the port is always random. Set this property to avoid problems with remote JMX connections tunnelled through a firewall. jmx-realm JMX realm to use for authentication. The default value is activemq to match the JAAS configuration. object-name Object name to expose the remote connector on. The default value is connector:name=rmi . secured Specify whether the connector is secured using SSL. The default value is false . Set the value to true to ensure secure communication. key-store-path Location of the keystore. Required if you have set secured="true" . key-store-password Keystore password. Required if you have set secured="true" . The password can be encrypted. key-store-provider Keystore provider. Required if you have set secured="true" . The default value is JKS . trust-store-path Location of the truststore. Required if you have set secured="true" . trust-store-password Truststore password. Required if you have set secured="true" . The password can be encrypted. trust-store-provider Truststore provider. Required if you have set secured="true" . The default value is JKS password-codec The fully qualified class name of the password codec to use. See the password masking documentation, linked below, for more details on how this works. Set an appropriate value for the endpoint serialization using jdk.serialFilter as described in the Java Platform documentation . Additional resources For more information about encrypted passwords in configuration files, see Encrypting Passwords in Configuration Files . 
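For example, once a secured connector is defined in management.xml, you can connect jconsole to the remote agent by passing it the truststore that contains the broker certificate. The host name, port, truststore path, and password below are placeholders; replace them with your own values.
jconsole -J-Djavax.net.ssl.trustStore=/path/to/client.ts -J-Djavax.net.ssl.trustStorePassword=<truststore_password> service:jmx:rmi:///jndi/rmi://<broker_host>:1099/jmxrmi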
6.2.3. MBeanServer configuration When the broker runs in standalone mode, it uses the Java Virtual Machine's Platform MBeanServer to register its MBeans. By default, Jolokia is also deployed to allow access to the MBean server using REST. 6.2.4. How JMX is exposed with Jolokia By default, AMQ Broker ships with the Jolokia HTTP agent deployed as a web application. Jolokia is a remote JMX over HTTP bridge that exposes MBeans. Note To use Jolokia, the user must belong to the role defined by the hawtio.role system property in the <broker_instance_dir> /etc/artemis.profile configuration file. By default, this role is amq . Example 6.1. Using Jolokia to query the broker's version This example uses a Jolokia REST URL to find the version of a broker. The Origin flag should specify the domain name or DNS host name for the broker server. In addition, the value you specify for Origin must correspond to an entry for <allow-origin> in your Jolokia Cross-Origin Resource Sharing (CORS) specification. $ curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\"0.0.0.0\"/Version -H "Origin: mydomain.com" {"request":{"mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\"","attribute":"Version","type":"read"},"value":"2.4.0.amq-710002-redhat-1","timestamp":1527105236,"status":200} Additional resources For more information on using a JMX-HTTP bridge, see the Jolokia documentation . For more information on assigning a user to a role, see Adding Users . For more information on specifying Jolokia Cross-Origin Resource Sharing (CORS), see section 4.1.5 of Security . 6.2.5. Subscribing to JMX management notifications If JMX is enabled in your environment, you can subscribe to management notifications. Procedure Subscribe to ObjectName org.apache.activemq.artemis:broker=" <broker-name> " . Additional resources For more information about management notifications, see Section 6.5, "Management notifications" . 6.3. Managing AMQ Broker using the JMS API The Java Message Service (JMS) API allows you to create, send, receive, and read messages. You can use JMS and the AMQ JMS client to manage brokers. 6.3.1. Configuring broker management using JMS messages and the AMQ JMS Client To use JMS to manage a broker, you must first configure the broker's management address with the manage permission. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the <management-address> element, and specify a management address. By default, the management address is queue.activemq.management . You only need to specify a different address if you do not want to use the default. <management-address>my.management.address</management-address> Provide the management address with the manage user permission type. This permission type enables the management address to receive and handle management messages. <security-setting match="queue.activemq.management"> <permission type="manage" roles="admin"/> </security-setting> 6.3.2. Managing brokers using the JMS API and AMQ JMS Client To invoke management operations using JMS messages, the AMQ JMS client must instantiate the special management queue. Procedure Create a QueueRequestor to send messages to the management address and receive replies. Create a Message . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to fill the message with the management properties. Send the message using the QueueRequestor .
Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to retrieve the operation result from the management reply. Example 6.2. Viewing the number of messages in a queue This example shows how to use the JMS API to view the number of messages in the JMS queue exampleQueue : Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management"); QueueSession session = ... QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, "queue.exampleQueue", "messageCount"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println("There are " + count + " messages in exampleQueue"); 6.4. Management operations Whether you are using JMX or JMS messages to manage AMQ Broker, you can use the same API management operations. Using the management API, you can manage brokers, addresses, and queues. 6.4.1. Broker management operations You can use the management API to manage your brokers. Listing, creating, deploying, and destroying queues A list of deployed queues can be retrieved using the getQueueNames() method. Queues can be created or destroyed using the management operations createQueue() , deployQueue() , or destroyQueue() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). createQueue will fail if the queue already exists, while deployQueue will do nothing. Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. Listing and closing remote connections Retrieve a client's remote addresses by using listRemoteAddresses() . It is also possible to close the connections associated with a remote address using the closeConnectionsForAddress() method. Alternatively, list connection IDs using listConnectionIDs() and list all the sessions for a given connection ID using listSessions() . Managing transactions In case of a broker crash, when the broker restarts, some transactions might require manual intervention. Use the following methods to help resolve issues you encounter. List the transactions which are in the prepared states (the transactions are represented as opaque Base64 Strings) using the listPreparedTransactions() method. Commit or roll back a given prepared transaction using commitPreparedTransaction() or rollbackPreparedTransaction() to resolve heuristic transactions. List heuristically completed transactions using the listHeuristicCommittedTransactions() and listHeuristicRolledBackTransactions() methods. Enabling and resetting message counters Enable and disable message counters using the enableMessageCounters() or disableMessageCounters() method. Reset message counters by using the resetAllMessageCounters() and resetAllMessageCounterHistories() methods. Retrieving broker configuration and attributes The ActiveMQServerControl exposes the broker's configuration through all its attributes (for example, getVersion() method to retrieve the broker's version, and so on). Listing, creating, and destroying Core Bridge and diverts List deployed Core Bridge and diverts using the getBridgeNames() and getDivertNames() methods respectively.
Create or destroy bridges and diverts using createBridge() and destroyBridge() or createDivert() and destroyDivert() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Stopping the broker and forcing failover to occur with any currently attached clients Use the forceFailover() method on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ) Note Because this method actually stops the broker, you will likely receive an error. The exact error depends on the management service you used to call the method. 6.4.2. Address management operations You can use the management API to manage addresses. Manage addresses using the AddressControl class with ObjectName org.apache.activemq.artemis:broker=" <broker-name> ", component=addresses,address=" <address-name> " or the resource name address. <address-name> . Modify roles and permissions for an address using the addRole() or removeRole() methods. You can list all the roles associated with the queue with the getRoles() method. 6.4.3. Queue management operations You can use the management API to manage queues. The core management API deals with queues. The QueueControl class defines the queue management operations (with the ObjectName , org.apache.activemq.artemis:broker=" <broker-name> ",component=addresses,address=" <bound-address> ",subcomponent=queues,routing-type=" <routing-type> ",queue=" <queue-name> " or the resource name queue. <queue-name> ). Most of the management operations on queues take either a single message ID (for example, to remove a single message) or a filter (for example, to expire all messages with a given property). Expiring, sending to a dead letter address, and moving messages Expire messages from a queue using the expireMessages() method. If an expiry address is defined, messages are sent to this address, otherwise they are discarded. You can define the expiry address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Send messages to a dead letter address using the sendMessagesToDeadLetterAddress() method. This method returns the number of messages sent to the dead letter address. If a dead letter address is defined, messages are sent to this address, otherwise they are removed from the queue and discarded. You can define the dead letter address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Move messages from one queue to another using the moveMessages() method. Listing and removing messages List messages from a queue using the listMessages() method. It will return an array of Map , one Map for each message. Remove messages from a queue using the removeMessages() method, which returns a boolean for the single message ID variant or the number of removed messages for the filter variant. This method takes a filter argument to remove only filtered messages. Setting the filter to an empty string will in effect remove all messages. Counting messages The number of messages in a queue is returned by the getMessageCount() method.
Alternatively, the countMessages() will return the number of messages in the queue which match a given filter. Changing message priority The message priority can be changed by using the changeMessagesPriority() method, which returns a boolean for the single message ID variant or the number of updated messages for the filter variant. Message counters Message counters can be listed for a queue with the listMessageCounter() and listMessageCounterHistory() methods (see Section 6.6, "Using message counters" ). The message counters can also be reset for a single queue using the resetMessageCounter() method. Retrieving the queue attributes The QueueControl exposes queue settings through its attributes (for example, getFilter() to retrieve the queue's filter if it was created with one, isDurable() to know whether the queue is durable, and so on). Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. 6.4.4. Remote resource management operations You can use the management API to start and stop a broker's remote resources (acceptors, diverts, bridges, and so on) so that the broker can be taken offline for a given period of time without stopping completely. Acceptors Start or stop an acceptor using the start() or stop() method on the AcceptorControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=acceptors,name=" <acceptor-name> " or the resource name acceptor. <address-name> ). Acceptor parameters can be retrieved using the AcceptorControl attributes. See Network Connections: Acceptors and Connectors for more information about Acceptors. Diverts Start or stop a divert using the start() or stop() method on the DivertControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=diverts,name=" <divert-name> " or the resource name divert. <divert-name> ). Divert parameters can be retrieved using the DivertControl attributes. Bridges Start or stop a bridge using the start() or stop() method on the BridgeControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=bridge,name=" <bridge-name> " or the resource name bridge. <bridge-name> ). Bridge parameters can be retrieved using the BridgeControl attributes. Broadcast groups Start or stop a broadcast group using the start() or stop() method on the BroadcastGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=broadcast-group,name=" <broadcast-group-name> " or the resource name broadcastgroup. <broadcast-group-name> ). Broadcast group parameters can be retrieved using the BroadcastGroupControl attributes. See Broker discovery methods for more information. Discovery groups Start or stop a discovery group using the start() or stop() method on the DiscoveryGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=discovery-group,name=" <discovery-group-name> " or the resource name discovery. <discovery-group-name> ). Discovery group parameters can be retrieved using the DiscoveryGroupControl attributes. See Broker discovery methods for more information.
Cluster connections Start or stop a cluster connection using the start() or stop() method on the ClusterConnectionControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=cluster-connection,name=" <cluster-connection-name> " or the resource name clusterconnection. <cluster-connection-name> ). Cluster connection parameters can be retrieved using the ClusterConnectionControl attributes. See Creating a broker cluster for more information. 6.5. Management notifications Below is a list of all the different kinds of notifications as well as which headers are on the messages. Every notification has a _AMQ_NotifType (value noted in parentheses) and _AMQ_NotifTimestamp header. The time stamp is the unformatted result of a call to java.lang.System.currentTimeMillis() . Notification type Headers BINDING_ADDED (0) _AMQ_Binding_Type _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString BINDING_REMOVED (1) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString CONSUMER_CREATED (2) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString CONSUMER_CLOSED (3) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString SECURITY_AUTHENTICATION_VIOLATION (6) _AMQ_User SECURITY_PERMISSION_VIOLATION (7) _AMQ_Address _AMQ_CheckType _AMQ_User DISCOVERY_GROUP_STARTED (8) name DISCOVERY_GROUP_STOPPED (9) name BROADCAST_GROUP_STARTED (10) name BROADCAST_GROUP_STOPPED (11) name BRIDGE_STARTED (12) name BRIDGE_STOPPED (13) name CLUSTER_CONNECTION_STARTED (14) name CLUSTER_CONNECTION_STOPPED (15) name ACCEPTOR_STARTED (16) factory id ACCEPTOR_STOPPED (17) factory id PROPOSAL (18) _JBM_ProposalGroupId _JBM_ProposalValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance PROPOSAL_RESPONSE (19) _JBM_ProposalGroupId _JBM_ProposalValue _JBM_ProposalAltValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance CONSUMER_SLOW (21) _AMQ_Address _AMQ_ConsumerCount _AMQ_RemoteAddress _AMQ_ConnectionName _AMQ_ConsumerName _AMQ_SessionName 6.6. Using message counters You use message counters to obtain information about queues over time. This helps you to identify trends that would otherwise be difficult to see. For example, you could use message counters to determine how a particular queue is being used over time. You could also attempt to obtain this information by using the management API to query the number of messages in the queue at regular intervals, but this would not show how the queue is actually being used. The number of messages in a queue can remain constant because no clients are sending or receiving messages on it, or because the number of messages sent to the queue is equal to the number of messages consumed from it. In both of these cases, the number of messages in the queue remains the same even though it is being used in very different ways. 6.6.1. Types of message counters Message counters provide additional information about queues on a broker. count The total number of messages added to the queue since the broker was started. countDelta The number of messages added to the queue since the last message counter update. lastAckTimestamp The time stamp of the last time a message from the queue was acknowledged. lastAddTimestamp The time stamp of the last time a message was added to the queue. 
messageCount The current number of messages in the queue. messageCountDelta The overall number of messages added/removed from the queue since the last message counter update. For example, if messageCountDelta is -10 , then 10 messages overall have been removed from the queue. updateTimestamp The time stamp of the last message counter update. Note You can combine message counters to determine other meaningful data as well. For example, to know specifically how many messages were consumed from the queue since the last update, you would subtract the messageCountDelta from countDelta . 6.6.2. Enabling message counters Message counters can have a small impact on the broker's memory; therefore, they are disabled by default. To use message counters, you must first enable them. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Enable message counters. <message-counter-enabled>true</message-counter-enabled> Set the message counter history and sampling period. <message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period> message-counter-max-day-history The number of days the broker should store queue metrics. The default is 10 days. message-counter-sample-period How often (in milliseconds) the broker should sample its queues to collect metrics. The default is 10000 milliseconds. 6.6.3. Retrieving message counters You can use the management API to retrieve message counters. Prerequisites Message counters must be enabled on the broker. For more information, see Section 6.6.2, "Enabling message counters" . Procedure Use the management API to retrieve message counters. // Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = ... JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format("%s message(s) in the queue (since last sample: %s)\n", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta()); Additional resources For more information about message counters, see Section 6.4.3, "Queue management operations" . | [
"org.apache.activemq.artemis:broker=\"__BROKER_NAME__\",component=addresses,address=\"exampleQueue\",subcomponent=queues,routingtype=\"anycast\",queue=\"exampleQueue\"",
"org.apache.activemq.artemis.api.management.QueueControl",
"<jmx-management-enabled>true</jmx-management-enabled>",
"<jmx-domain>my.org.apache.activemq</jmx-domain>",
"<connector connector-port=\"1099\"/>",
"service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi",
"curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"/Version -H \"Origin: mydomain.com\" {\"request\":{\"mbean\":\"org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"\",\"attribute\":\"Version\",\"type\":\"read\"},\"value\":\"2.4.0.amq-710002-redhat-1\",\"timestamp\":1527105236,\"status\":200}",
"<management-address>my.management.address</management-address>",
"<security-setting-match=\"queue.activemq.management\"> <permission-type=\"manage\" roles=\"admin\"/> </security-setting>",
"Queue managementQueue = ActiveMQJMSClient.createQueue(\"activemq.management\"); QueueSession session = QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, \"queue.exampleQueue\", \"messageCount\"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println(\"There are \" + count + \" messages in exampleQueue\");",
"<message-counter-enabled>true</message-counter-enabled>",
"<message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period>",
"// Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format(\"%s message(s) in the queue (since last sample: %s)\\n\", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta());"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/managing_amq_broker/management-api-managing |
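The Java example above is not the only way to read queue metrics: the same MBean attributes are usually reachable over the broker's Jolokia HTTP endpoint shown earlier in this section. The command below is a minimal sketch only; it assumes the default admin credentials, the default web console port, and the example queue ObjectName from this section, so the broker name, address, queue name, Origin header, and URL quoting will differ in your environment.

# Hedged sketch: read the MessageCount attribute of the example queue over Jolokia.
curl -s -H "Origin: mydomain.com" \
  "http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,address=\"exampleQueue\",subcomponent=queues,routingtype=\"anycast\",queue=\"exampleQueue\"/MessageCount"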
6.4.2. Useful Websites | 6.4.2. Useful Websites http://www.bergen.org/ATC/Course/InfoTech/passwords.html - A good example of a document conveying information about password security to an organization's users. http://www.crypticide.org/users/alecm/ - Homepage of the author of one of the most popular password-cracking programs (Crack). You can download Crack from this page and see how many of your users have weak passwords. http://www.linuxpowered.com/html/editorials/file.html - A good overview of Linux file permissions. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-acctsgrps-addres-web |
Chapter 5. Configuring Ansible Automation Platform Central Authentication Generic OIDC settings and Red Hat SSO/keycloak for Red Hat SSO and Ansible Automation Platform | Chapter 5. Configuring Ansible Automation Platform Central Authentication Generic OIDC settings and Red Hat SSO/keycloak for Red Hat SSO and Ansible Automation Platform Ansible Automation Platform Central Authentication supports configuring generic OIDC settings and Red Hat SSO/keycloak for Red Hat SSO and Ansible Automation Platform. 5.1. Prerequisites You are able to log in as an admin user. 5.2. Configuring Central Authentication Generic OIDC settings Procedure Log in to RH-SSO as admin. Note If you have an existing realm, you can skip to step 6. Add Realm. Enter Name and click Create . Click the Clients tab. Enter a name and click Create . Set Client Protocol to openid-connect . Set Access Type to confidential . In the Root URL field, enter your Ansible Automation Platform server IP or hostname. In the Valid Redirect field, enter your Ansible Automation Platform server IP or hostname. If not in production, set to *. In the Web origins field, enter your Ansible Automation Platform server IP or hostname. If not in production, set to *. Click the Credentials tab. Note Keep track of the Secret to be used later. Log in to Ansible Automation Platform Controller as admin. From the navigation panel, select Settings . Select Generic OIDC settings from the list of Authentication options. Click Edit . In the OIDC Key field, enter the name of your client from step 5. In the OIDC Secret field, enter the secret saved from step 8. In the OIDC Provider URL field, enter your keycloak server URL and port. Click Save . OIDC should appear as an option for login. Click Sign in with OIDC to be redirected to the SSO server for login and then back to Ansible Automation Platform. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_and_configuring_central_authentication_for_the_ansible_automation_platform/configuring-central-auth-generic-oidc-settings |
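Before wiring the client into the controller, it can help to confirm that the realm actually publishes OIDC metadata. The check below is a sketch only: it assumes RH-SSO is served under the default /auth context, and the hostname rh-sso.example.com and realm name my_realm are hypothetical placeholders for your own values.

curl -s https://rh-sso.example.com/auth/realms/my_realm/.well-known/openid-configuration | python3 -m json.tool

If this returns the realm's issuer and endpoint URLs, the realm and client protocol are in place and the remaining work is on the Ansible Automation Platform side.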
Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] | Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. status object status represents the current information of a snapshot. 13.1.1. .spec Description spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. Type object Required deletionPolicy driver source volumeSnapshotRef Property Type Description deletionPolicy string deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. driver string driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. source object source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. sourceVolumeMode string SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either "Filesystem" or "Block". If not specified, it indicates the source volume's mode is unknown. This field is immutable. This field is an alpha field. volumeSnapshotClassName string name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with different set of values, and as such, should not be referenced post-snapshot creation. volumeSnapshotRef object volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. 
For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 13.1.2. .spec.source Description source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. Type object Property Type Description snapshotHandle string snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. volumeHandle string volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken from. This field is immutable. 13.1.3. .spec.volumeSnapshotRef Description volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 13.1.4. .status Description status represents the current information of a snapshot. Type object Property Type Description creationTime integer creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. 
On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. error object error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. readyToUse boolean readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. snapshotHandle string snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. 13.1.5. .status.error Description error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 13.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents DELETE : delete collection of VolumeSnapshotContent GET : list objects of kind VolumeSnapshotContent POST : create a VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} DELETE : delete a VolumeSnapshotContent GET : read the specified VolumeSnapshotContent PATCH : partially update the specified VolumeSnapshotContent PUT : replace the specified VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status GET : read status of the specified VolumeSnapshotContent PATCH : partially update status of the specified VolumeSnapshotContent PUT : replace status of the specified VolumeSnapshotContent 13.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents Table 13.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeSnapshotContent Table 13.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotContent Table 13.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.5. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotContent Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 202 - Accepted VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} Table 13.9. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent Table 13.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete a VolumeSnapshotContent Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.12. Body parameters Parameter Type Description body DeleteOptions schema Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotContent Table 13.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.15. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotContent Table 13.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.17. Body parameters Parameter Type Description body Patch schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotContent Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.3. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status Table 13.22. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent Table 13.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified VolumeSnapshotContent Table 13.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset Table 13.25. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshotContent Table 13.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.27. Body parameters Parameter Type Description body Patch schema Table 13.28. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshotContent Table 13.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.30. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.31. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage_apis/volumesnapshotcontent-snapshot-storage-k8s-io-v1 |
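For a pre-existing snapshot, the spec fields described above (deletionPolicy, driver, source.snapshotHandle, and volumeSnapshotRef) are typically all that is needed to register the on-disk snapshot with the cluster. The manifest below is a sketch under stated assumptions: the CSI driver name, snapshot handle, object names, and namespace are hypothetical placeholders, not values from this reference.

cat <<'EOF' | oc create -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-snapcontent
spec:
  deletionPolicy: Retain
  driver: csi.example.vendor.com
  source:
    snapshotHandle: snap-0123456789abcdef
  volumeSnapshotRef:
    name: example-snapshot
    namespace: default
EOF
oc get volumesnapshotcontent example-snapcontent -o yaml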
Administering Red Hat Satellite | Administering Red Hat Satellite Red Hat Satellite 6.15 Administer users and permissions, manage organizations and locations, back up and restore Satellite, maintain Satellite, and more Red Hat Satellite Documentation Team [email protected] | [
"hostname -f",
"https:// satellite.example.com /pub",
"scp /var/www/html/pub/katello-server-ca.crt username@hostname:remotefile",
"https:// satellite.example.com /",
"kinit idm_user",
"hammer auth login negotiate",
"kdestroy -A",
"hammer host list",
"kinit idm_user Password for idm_user@ EXAMPLE.COM :",
"google-chrome --auth-server-whitelist=\"*. example.com \" --auth-negotiate-delegate-whitelist=\"*. example.com \"",
"kinit idm_user Password for idm_user@_EXAMPLE.COM :",
"foreman-rake permissions:reset Reset to user: admin, password: qwJxBptxb7Gfcjj5",
"vi ~/.hammer/cli.modules.d/foreman.yml",
"foreman-rake permissions:reset password= new_password",
"vi ~/.hammer/cli.modules.d/foreman.yml",
"satellite-maintain service list",
"satellite-maintain service status",
"satellite-maintain service stop",
"satellite-maintain service start",
"satellite-maintain service restart",
"satellite-maintain backup offline --skip-pulp-content --assumeyes /var/backup",
"satellite-maintain service stop satellite-maintain service disable",
"rsync --archive --partial --progress --compress /var/lib/pulp/ target_server.example.com:/var/lib/pulp/",
"du -sh /var/lib/pulp/",
"satellite-maintain backup offline --assumeyes /var/backup",
"satellite-maintain service stop satellite-maintain service disable",
"subscription-manager register your_customer_portal_credentials subscription-manager repos --disable=* subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms dnf module enable satellite-maintenance:el8",
"dnf install satellite-clone",
"satellite-clone",
"cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original",
"satellite-installer --tuning medium",
"satellite-maintain service status --only postgresql",
"subscription-manager repos --disable '*' subscription-manager repos --enable=satellite-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf install postgresql-server postgresql-evr postgresql-contrib",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /32 md5",
"systemctl enable --now postgresql",
"firewall-cmd --add-service=postgresql",
"firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".",
"pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-maintain service stop",
"systemctl start postgresql",
"satellite-maintain backup online --preserve-directory --skip-pulp-content /var/migration_backup",
"PGPASSWORD=' Foreman_Password ' pg_restore -h postgres.example.com -U foreman -d foreman < /var/migration_backup/foreman.dump PGPASSWORD=' Candlepin_Password ' pg_restore -h postgres.example.com -U candlepin -d candlepin < /var/migration_backup/candlepin.dump PGPASSWORD=' Pulpcore_Password ' pg_restore -h postgres.example.com -U pulp -d pulpcore < /var/migration_backup/pulpcore.dump",
"satellite-installer --foreman-db-database foreman --foreman-db-host postgres.example.com --foreman-db-manage false --foreman-db-password Foreman_Password --foreman-db-username foreman --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-proxy-content-pulpcore-postgresql-user pulp --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-password Candlepin_Password --katello-candlepin-db-user candlepin --katello-candlepin-manage-db false",
"satellite-maintain packages install ansible-collection-redhat-satellite",
"ansible-doc -l redhat.satellite",
"ansible-doc redhat.satellite.activation_key",
"hammer organization create --name \" My_Organization \" --label \" My_Organization_Label \" --description \" My_Organization_Description \"",
"hammer organization update --name \" My_Organization \" --compute-resource-ids 1",
"vi 'Default Organization-key-cert.pem'",
"openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out My_Organization_Label .pfx -name My_Organization",
"https:// satellite.example.com /pulp/content/",
"curl -k --cert cert.pem --key key.pem https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/content/dist/rhel/server/7/7Server/x86_64/os/",
"hammer organization list",
"hammer organization delete --id Organization_ID",
"hammer location create --description \" My_Location_Description \" --name \" My_Location \" --parent-id \" My_Location_Parent_ID \"",
"ORG=\" Example Organization \" LOCATIONS=\" London Munich Boston \" for LOC in USD{LOCATIONS} do hammer location create --name \"USD{LOC}\" hammer location add-organization --name \"USD{LOC}\" --organization \"USD{ORG}\" done",
"hammer host list --location \" My_Location \"",
"hammer location list",
"hammer location delete --id Location ID",
"hammer user create --auth-source-id My_Authentication_Source --login My_User_Name --mail My_User_Mail --organization-ids My_Organization_ID_1 , My_Organization_ID_2 --password My_User_Password",
"hammer user add-role --id user_id --role role_name",
"openssl rand -hex 32",
"hammer user ssh-keys add --user-id user_id --name key_name --key-file ~/.ssh/id_rsa.pub",
"hammer user ssh-keys add --user-id user_id --name key_name --key ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNtYAAABBBHHS2KmNyIYa27Qaa7EHp+2l99ucGStx4P77e03ZvE3yVRJEFikpoP3MJtYYfIe8k 1/46MTIZo9CPTX4CYUHeN8= host@user",
"hammer user ssh-keys delete --id key_id --user-id user_id",
"hammer user ssh-keys info --id key_id --user-id user_id",
"hammer user ssh-keys list --user-id user_id",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{\"satellite_version\":\"6.15.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }",
"hammer user-group create --name My_User_Group_Name --role-ids My_Role_ID_1 , My_Role_ID_2 --user-ids My_User_ID_1 , My_User_ID_2",
"hammer role create --name My_Role_Name",
"hammer filter available-permissions",
"hammer filter create --permission-ids My_Permission_ID_1 , My_Permission_ID_2 --role My_Role_Name",
"foreman-rake console",
"f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)",
"<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>",
"</table>",
"field_name operator value",
"hammer filter create --permission-ids 91 --search \"name ~ ccv*\" --role qa-user",
"hostgroup = host-editors",
"name ^ (XXXX, Yyyy, zzzz)",
"Dev",
"postqueue: warning: Mail system is down -- accessing queue directly -Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient------- BE68482A783 1922 Thu Oct 3 05:13:36 [email protected]",
"systemctl start postfix",
"foreman-rake reports:_My_Frequency_",
"satellite-maintain service stop",
"satellite-maintain service start",
"du -sh /var/lib/pgsql/data /var/lib/pulp 100G /var/lib/pgsql/data 100G /var/lib/pulp du -csh /var/lib/tftpboot /etc /root/ssl-build /var/www/html/pub /opt/puppetlabs 16M /var/lib/tftpboot 37M /etc 900K /root/ssl-build 100K /var/www/html/pub 2M /opt/puppetlabs 942M total",
"satellite-maintain backup offline --help",
"satellite-maintain backup online --help",
"satellite-maintain backup snapshot --help",
"satellite-maintain backup offline /var/satellite-backup",
"satellite-maintain backup offline /var/foreman-proxy-backup",
"satellite-maintain backup offline --skip-pulp-content /var/backup_directory",
"satellite-maintain backup offline /var/backup_directory",
"satellite-maintain backup offline --incremental /var/backup_directory/full_backup /var/backup_directory",
"satellite-maintain backup offline --incremental /var/backup_directory/first_incremental_backup /var/backup_directory",
"satellite-maintain backup offline --incremental /var/backup_directory/full_backup /var/backup_directory",
"#!/bin/bash -e PATH=/sbin:/bin:/usr/sbin:/usr/bin DESTINATION=/var/backup_directory if [[ USD(date +%w) == 0 ]]; then satellite-maintain backup offline --assumeyes USDDESTINATION else LAST=USD(ls -td -- USDDESTINATION/*/ | head -n 1) satellite-maintain backup offline --assumeyes --incremental \"USDLAST\" USDDESTINATION fi exit 0",
"satellite-maintain backup online /var/backup_directory",
"satellite-maintain backup snapshot -h",
"satellite-maintain backup snapshot /var/backup_directory",
"du -sh /var/backup_directory",
"df -h /var/backup_directory",
"restorecon -Rv /",
"satellite-maintain restore /var/backup_directory",
"satellite-maintain restore /var/backup_directory /FIRST_INCREMENTAL satellite-maintain restore /var/backup_directory /SECOND_INCREMENTAL",
"satellite-change-hostname new-satellite --username My_Username --password My_Password",
"satellite-change-hostname new-satellite --username My_Username --password My_Password --custom-cert \"/root/ownca/test.com/test.com.crt\" --custom-key \"/root/ownca/test.com/test.com.key\"",
"satellite-installer --foreman-proxy-foreman-base-url https:// new-satellite.example.com --foreman-proxy-trusted-hosts new-satellite.example.com",
"hammer capsule list",
"hammer capsule content synchronize --id My_capsule_ID",
"capsule-certs-generate --certs-tar /root/ new-capsule.example.com-certs.tar --foreman-proxy-fqdn new-capsule.example.com",
"scp /root/ new-capsule.example.com-certs.tar root@ capsule.example.com :",
"satellite-change-hostname new-capsule.example.com --certs-tar /root/ new-capsule.example.com-certs.tar --password My_Password --username My_Username",
"dnf remove katello-ca-consumer* dnf install http:// new-capsule.example.com /pub/katello-ca-consumer-latest.noarch.rpm subscription-manager register --environment=\" My_Lifecycle_Environment \" --force --org=\" My_Organization \" subscription-manager refresh",
"foreman-rake audits:expire days= Number_Of_Days",
"foreman-rake audits:anonymize days=7",
"foreman-rake reports:expire days=7",
"satellite-installer --foreman-plugin-tasks-cron-line \"00 15 * * *\"",
"satellite-installer --foreman-plugin-tasks-automatic-cleanup false",
"satellite-installer --foreman-plugin-tasks-automatic-cleanup true",
"foreman-rake foreman_tasks:cleanup TASK_SEARCH='label = Actions::Katello::Repository::Sync' STATES='stopped'",
"ssh [email protected]",
"hammer task info --id My_Task_ID",
"foreman-rake foreman_tasks:cleanup TASK_SEARCH=\"id= My_Task_ID \"",
"hammer task info --id My_Task_ID",
"foreman-rake katello:delete_orphaned_content RAILS_ENV=production",
"satellite-maintain service stop",
"satellite-maintain service start",
"satellite-maintain packages install package_1 package_2",
"satellite-maintain packages check-update",
"satellite-maintain packages update",
"satellite-maintain packages update package_1 package_2",
"satellite-maintain service stop --exclude postgresql",
"su - postgres -c 'vacuumdb --full --all'",
"satellite-maintain service start",
"foreman-rake katello:delete_orphaned_content",
"katello-certs-check -t satellite -b /root/ satellite_cert/ca_cert_bundle.pem -c /root/ satellite_cert/satellite_cert.pem -k /root/ satellite_cert/satellite_cert_key.pem",
"satellite-installer --scenario satellite --certs-server-cert \"/root/ satellite_cert/satellite_cert.pem \" --certs-server-key \"/root/ satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \"/root/ satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca",
"katello-certs-check -t capsule -b /root/ capsule_cert/ca_cert_bundle.pem -c /root/ capsule_cert/capsule_cert.pem -k /root/ capsule_cert/capsule_cert_key.pem",
"capsule-certs-generate --certs-tar \" /root/My_Certificates/capsule.example.com-certs.tar \" --certs-update-server --foreman-proxy-fqdn \" capsule.example.com \" --server-ca-cert \" /root/My_Certificates/ca_cert_bundle.pem \" --server-cert \" /root/My_Certificates/capsule_cert.pem \" --server-key \" /root/My_Certificates/capsule_cert_key.pem \"",
"scp /root/My_Certificates/capsule.example.com-certs.tar [email protected] :",
"satellite-installer --scenario capsule --certs-tar-file \" /root/My_Certificates/capsule.example.com-certs.tar \" --certs-update-server --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-register-in-foreman \"true\"",
"satellite-installer --foreman-logging-level debug",
"satellite-installer --reset-foreman-logging-level",
"satellite-installer --full-help | grep logging",
":log_level: 'debug'",
"satellite-installer --foreman-proxy-log-level DEBUG",
"satellite-installer --reset-foreman-proxy-log-level",
"satellite-installer --katello-candlepin-loggers log4j.logger.org.candlepin:DEBUG",
"satellite-installer --katello-candlepin-loggers log4j.logger.org.candlepin:DEBUG --katello-candlepin-loggers log4j.logger.org.candlepin.resource.ConsumerResource:WARN --katello-candlepin-loggers log4j.logger.org.candlepin.resource.HypervisorResource:WARN",
"satellite-installer --reset-katello-candlepin-loggers",
"loglevel debug",
"systemctl restart redis",
"satellite-installer --verbose-log-level debug",
"LOGGING = {\"dynaconf_merge\": True, \"loggers\": {'': {'handlers': ['console'], 'level': 'DEBUG'}}}",
"systemctl restart pulpcore-api pulpcore-content pulpcore-resource-manager pulpcore-worker@1 pulpcore-worker@2 redis",
"satellite-installer --puppet-agent-additional-settings log_level:debug",
"satellite-installer --puppet-server-additional-settings log_level:debug",
"satellite-maintain service restart --only puppetserver",
"hammer admin logging --list",
"hammer admin logging --all --level-debug satellite-maintain service restart",
"hammer admin logging --all --level-production satellite-maintain service restart",
"hammer admin logging --components My_Component --level-debug satellite-maintain service restart",
"hammer admin logging --help",
"satellite-installer --foreman-logging-type journald --foreman-proxy-log JOURNAL",
"satellite-installer --reset-foreman-logging-type --reset-foreman-proxy-log",
"satellite-installer --foreman-logging-layout json --foreman-logging-type file",
"cat /var/log/foreman/production.log | jq",
"satellite-installer --foreman-loggers ldap:true --foreman-loggers sql:true",
"satellite-installer --reset-foreman-loggers",
"hammer ping",
"satellite-maintain service status",
"satellite-maintain health check",
"satellite-maintain service restart",
"awk '/add_loggers/,/^USD/' /usr/share/foreman/config/application.rb",
"There was an issue with the backend service candlepin: Connection refused - connect(2).",
"foreman-rake audits:list_attributes",
"satellite-installer --enable-foreman-plugin-webhooks",
"satellite-installer --enable-foreman-cli-webhooks",
"{ \"text\": \"job invocation <%= @object.job_invocation_id %> finished with result <%= @object.task.result %>\" }",
"{ \"text\": \"user with login <%= @object.login %> and email <%= @object.mail %> created\" }",
"satellite-installer --enable-foreman-proxy-plugin-shellhooks",
"{ \"X-Shellhook-Arg-1\": \" VALUE \", \"X-Shellhook-Arg-2\": \" VALUE \" }",
"{ \"X-Shellhook-Arg-1\": \"<%= @object.content_view_version_id %>\", \"X-Shellhook-Arg-2\": \"<%= @object.content_view_name %>\" }",
"\"X-Shellhook-Arg-1: VALUE \" \"X-Shellhook-Arg-2: VALUE \"",
"curl -sX POST -H 'Content-Type: text/plain' -H \"X-Shellhook-Arg-1: Version 1.0\" -H \"X-Shellhook-Arg-2: My content view\" --data \"\" https://capsule.example.com:9090/shellhook/My_Script",
"#!/bin/sh # Prints all arguments to stderr # echo \"USD@\" >&2",
"https:// capsule.example.com :9090/shellhook/print_args",
"{ \"X-Shellhook-Arg-1\": \"Hello\", \"X-Shellhook-Arg-2\": \"World!\" }",
"tail /var/log/foreman-proxy/proxy.log",
"[I] Started POST /shellhook/print_args [I] Finished POST /shellhook/print_args with 200 (0.33 ms) [I] [3520] Started task /var/lib/foreman-proxy/shellhooks/print_args\\ Hello\\ World\\! [W] [3520] Hello World!",
"parameter operator value"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/administering_red_hat_satellite/index |
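Several of the commands listed above lend themselves to a simple health-check wrapper. The script below is a minimal sketch that only chains the documented checks and stops at the first failure; the script name, scheduling, and any alerting around it are left to you.

#!/bin/bash
# Minimal Satellite health-check sketch using only documented commands.
set -euo pipefail
hammer ping
satellite-maintain service status
satellite-maintain health check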
Chapter 45. keypair | Chapter 45. keypair This chapter describes the commands under the keypair command. 45.1. keypair create Create new public or private key for server ssh access Usage: Table 45.1. Positional Arguments Value Summary <name> New public or private key name Table 45.2. Optional Arguments Value Summary -h, --help Show this help message and exit --public-key <file> Filename for public key to add. if not used, creates a private key. --private-key <file> Filename for private key to save. if not used, print private key in console. Table 45.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 45.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 45.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 45.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 45.2. keypair delete Delete public or private key(s) Usage: Table 45.7. Positional Arguments Value Summary <key> Name of key(s) to delete (name only) Table 45.8. Optional Arguments Value Summary -h, --help Show this help message and exit 45.3. keypair list List key fingerprints Usage: Table 45.9. Optional Arguments Value Summary -h, --help Show this help message and exit Table 45.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 45.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 45.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 45.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 45.4. keypair show Display key details Usage: Table 45.14. Positional Arguments Value Summary <key> Public or private key to display (name only) Table 45.15. Optional Arguments Value Summary -h, --help Show this help message and exit --public-key Show only bare public key paired with the generated key Table 45.16. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 45.17. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 45.18. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 45.19. 
Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show. | [
"openstack keypair create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public-key <file> | --private-key <file>] <name>",
"openstack keypair delete [-h] <key> [<key> ...]",
"openstack keypair list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack keypair show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public-key] <key>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/keypair |
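A minimal end-to-end illustration of the keypair commands documented above, assuming an already-configured OpenStack CLI session; the key name demo-key and the public-key path ~/.ssh/id_rsa.pub are placeholders, not values from the reference:

# Register an existing public key under the name demo-key
openstack keypair create --public-key ~/.ssh/id_rsa.pub demo-key

# List registered keys and display details of the new one
openstack keypair list
openstack keypair show demo-key

# Remove the key when it is no longer needed
openstack keypair delete demo-key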
Chapter 1. Build controller observability | Chapter 1. Build controller observability Builds exposes several metrics to help you monitor the performance and functioning of your build resources. The build controller metrics are exposed on the port 8383 . 1.1. Build controller metrics You can check the following build controller metrics for monitoring purposes: Table 1.1. Build controller metrics Name Type Description Labels Status build_builds_registered_total Counter The number of total registered builds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> experimental build_buildruns_completed_total Counter The number of total completed build runs. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_establish_duration_seconds Histogram The build run establish duration in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_completion_duration_seconds Histogram The build run completion duration in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_rampup_duration_seconds Histogram The build run ramp-up duration in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_taskrun_rampup_duration_seconds Histogram The build run ramp-up duration for a task run in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_taskrun_pod_rampup_duration_seconds Histogram The build run ramp-up duration for a task run pod in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental 1.1.1. Histogram metrics To use custom buckets for the build controller, you must set the environment variable for a particular histogram metric. The following table shows the environment variables for all histogram metrics: Table 1.2. Histogram metrics Metric Environment variable Default build_buildrun_establish_duration_seconds PROMETHEUS_BR_EST_DUR_BUCKETS 0,1,2,3,5,7,10,15,20,30 build_buildrun_completion_duration_seconds PROMETHEUS_BR_COMP_DUR_BUCKETS 50,100,150,200,250,300,350,400,450,500 build_buildrun_rampup_duration_seconds PROMETHEUS_BR_RAMPUP_DUR_BUCKETS 0,1,2,3,4,5,6,7,8,9,10 build_buildrun_taskrun_rampup_duration_seconds PROMETHEUS_BR_RAMPUP_DUR_BUCKETS 0,1,2,3,4,5,6,7,8,9,10 build_buildrun_taskrun_pod_rampup_duration_seconds PROMETHEUS_BR_RAMPUP_DUR_BUCKETS 0,1,2,3,4,5,6,7,8,9,10 | null | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/observability/build-controller-observability |
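A sketch of how the configuration described above might be exercised; the openshift-builds namespace, the builds-controller deployment name, and the /metrics path are assumptions for illustration only, and the actual names can differ per installation:

# Override the completion-duration buckets through the controller environment
# (only the variable name and its default come from the table above)
oc set env deployment/builds-controller -n openshift-builds \
    PROMETHEUS_BR_COMP_DUR_BUCKETS=100,200,300,400,500,600

# Scrape the metrics exposed on port 8383
oc port-forward deployment/builds-controller 8383:8383 -n openshift-builds &
curl -s http://localhost:8383/metrics | grep '^build_buildruns_completed_total'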
4.8. at | 4.8.1. RHBA-2012:0068 - at bug fix update An updated at package that fixes one bug is now available for Red Hat Enterprise Linux 6. The at package provides the "at" and "batch" commands, which read the commands to run from standard input or from a specified file. The "at" command allows you to specify that a command will be run at a particular time. The "batch" command executes commands when the system load average drops below a particular level. Both commands use /bin/sh. Bug Fix BZ# 783190 Due to an error in the time-parsing routine, the "at" command incorrectly calculated the year when a job was scheduled using a day offset on input, for example: "at now + 10 days". This update fixes the erroneous time-parsing grammar so that "at" now schedules such jobs correctly. All users of at are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/at |
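For context, a small illustration of the scheduling form affected by the bug; the script path is a placeholder:

# Queue a job using a day offset, the form that previously miscalculated the year
echo "/usr/local/bin/cleanup.sh" | at now + 10 days

# Confirm the scheduled execution date
atq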
1.4.2. Direct Routing | 1.4.2. Direct Routing Building an LVS setup that uses direct routing provides increased performance benefits compared to other LVS networking topologies. Direct routing allows the real servers to process and route packets directly to a requesting user rather than passing all outgoing packets through the LVS router. Direct routing reduces the possibility of network performance issues by relegating the job of the LVS router to processing incoming packets only. Figure 1.4. LVS Implemented with Direct Routing In the typical direct routing LVS setup, the LVS router receives incoming server requests through the virtual IP (VIP) and uses a scheduling algorithm to route each request to the real servers. The real server processes the request and sends the response directly to the client, bypassing the LVS routers. This method of routing allows for scalability in that real servers can be added without the added burden on the LVS router to route outgoing packets from the real server to the client, which can become a bottleneck under heavy network load. 1.4.2.1. Direct Routing and the ARP Limitation While there are many advantages to using direct routing in LVS, there are limitations as well. The most common issue with LVS via direct routing involves the Address Resolution Protocol (ARP). In typical situations, a client on the Internet sends a request to an IP address. Network routers typically send requests to their destination by relating IP addresses to a machine's MAC address with ARP. ARP requests are broadcast to all connected machines on a network, and the machine with the correct IP/MAC address combination receives the packet. The IP/MAC associations are stored in an ARP cache, which is cleared periodically (usually every 15 minutes) and refilled with IP/MAC associations. The issue with ARP requests in a direct routing LVS setup is that because a client request to an IP address must be associated with a MAC address for the request to be handled, the virtual IP address of the LVS system must also be associated with a MAC address. However, since both the LVS router and the real servers have the same VIP, the ARP request is broadcast to all the machines associated with the VIP. This can cause several problems, such as the VIP being associated directly with one of the real servers, which then processes requests itself, bypassing the LVS router completely and defeating the purpose of the LVS setup. To solve this issue, ensure that incoming requests are always sent to the LVS router rather than to one of the real servers. This can be done by using either the arptables_jf or the iptables packet filtering tool, for the following reasons: arptables_jf prevents ARP from associating VIPs with real servers. The iptables method completely sidesteps the ARP problem by not configuring VIPs on real servers in the first place. For more information on using arptables or iptables in a direct routing LVS environment, refer to Section 3.2.1, "Direct Routing and arptables_jf" or Section 3.2.2, "Direct Routing and iptables". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-lvs-directrouting-VSA |
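A hedged sketch of the two approaches named above, run on each real server; the VIP 192.0.2.10, the real server address 192.0.2.21, and port 80 are placeholders, and Sections 3.2.1 and 3.2.2 remain the authoritative procedures:

# arptables_jf approach: keep the real server from answering or advertising ARP for the VIP
arptables -A IN -d 192.0.2.10 -j DROP
arptables -A OUT -s 192.0.2.10 -j mangle --mangle-ip-s 192.0.2.21

# iptables approach: leave the VIP unconfigured on the real server and redirect
# traffic addressed to the VIP to the locally bound service instead
iptables -t nat -A PREROUTING -p tcp -d 192.0.2.10 --dport 80 -j REDIRECT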
13.3. Configuring Soft-RoCE | 13.3. Configuring Soft-RoCE RoCE can be implemented both in hardware and in software. Soft-RoCE is the software implementation of the RDMA transport. Prerequisites Since Red Hat Enterprise Linux 7.4, the Soft-RoCE driver is already merged into the kernel. The user-space driver is also merged into the rdma-core package. Soft-RoCE is also known as RXE. To start, stop, and configure RXE, use the rxe_cfg script. To view options for rxe_cfg, enter rxe_cfg help. Procedure 13.2. Configuring Soft-RoCE As the root user, enter the following command to display the current configuration status of RXE: To load the RXE kernel module and start RXE, enter as root: Optionally, to verify that the RXE kernel module is loaded, enter: Before adding a new RXE device over an Ethernet interface, the corresponding interface must be up and have a valid IP address assigned. To add a new RXE device, for example igb_1: The rxe0 in the RDEV column indicates that rxe is enabled for the igb_1 device. To verify the status of an RXE device, use the ibv_devices command: Alternatively, enter the ibstat command for a detailed status: Removing an RXE device If you want to remove an RXE device, enter: Verifying Connectivity of an RXE device The following examples show how to verify connectivity of an RXE device on the server and client side. Example 13.1. Verifying Connectivity of an RXE device on the Server Side Example 13.2. Verifying Connectivity of an RXE device on the Client Side | [
"~]# rxe_cfg rdma_rxe module not loaded Name Link Driver Speed NMTU IPv4_addr RDEV RMTU igb_1 yes igb mlx4_1 no mlx4_en mlx4_2 no mlx4_en",
"~]# rxe_cfg start Name Link Driver Speed NMTU IPv4_addr RDEV RMTU igb_1 yes igb mlx4_1 no mlx4_en mlx4_2 no mlx4_en",
"~]# lsmod |grep rdma_rxe rdma_rxe 111129 0 ip6_udp_tunnel 12755 1 rdma_rxe udp_tunnel 14423 1 rdma_rxe ib_core 236827 15 rdma_cm,ib_cm,iw_cm,rpcrdma,mlx4_ib,ib_srp,ib_ucm,ib_iser,ib_srpt,ib_umad,ib_uverbs,rdma_rxe,rdma_ucm,ib_ipoib,ib_isert",
"~]# rxe_cfg add igb_1",
"~]# rxe_cfg status Name Link Driver Speed NMTU IPv4_addr RDEV RMTU igb_1 yes igb rxe0 1024 (3) mlx4_1 no mlx4_en mlx4_2 no mlx4_en",
"~]# ibv_devices device node GUID ------ ---------------- mlx4_0 0002c90300b3cff0 rxe0 a2369ffffe018294",
"~]# ibstat rxe0 CA 'rxe0' CA type: Number of ports: 1 Firmware version: Hardware version: Node GUID: 0xa2369ffffe018294 System image GUID: 0x0000000000000000 Port 1: State: Active Physical state: LinkUp Rate: 2.5 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00890000 Port GUID: 0xa2369ffffe018294 Link layer: Ethernet",
"~]# rxe_cfg remove igb_1",
"~]USD ibv_rc_pingpong -d rxe0 -g 0 local address: LID 0x0000, QPN 0x000012, PSN 0xe2965f, GID fe80::290:faff:fe29:486a remote address: LID 0x0000, QPN 0x000011, PSN 0x4bf206, GID fe80::290:faff:fe29:470a 8192000 bytes in 0.05 seconds = 1244.06 Mbit/sec 1000 iters in 0.05 seconds = 52.68 usec/iter",
"~]USD ibv_rc_pingpong -d rxe0 -g 0 172.31.40.4 local address: LID 0x0000, QPN 0x000011, PSN 0x4bf206, GID fe80::290:faff:fe29:470a remote address: LID 0x0000, QPN 0x000012, PSN 0xe2965f, GID fe80::290:faff:fe29:486a 8192000 bytes in 0.05 seconds = 1245.72 Mbit/sec 1000 iters in 0.05 seconds = 52.61 usec/iter"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_soft-_roce |
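A short sketch of the prerequisite step mentioned in the procedure above, bringing the Ethernet interface up with an address before adding the RXE device; the interface name matches the examples, but the 172.31.40.4/24 addressing is an assumption:

# Bring the interface up and assign an address before creating the RXE device
ip link set igb_1 up
ip addr add 172.31.40.4/24 dev igb_1

# Then add and verify the Soft-RoCE device as shown in the procedure
rxe_cfg add igb_1
ibv_devices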
Chapter 3. Red Hat build of OpenJDK features | Chapter 3. Red Hat build of OpenJDK features 3.1. New features and enhancements This section describes the new features introduced in this release. It also contains information about changes in the existing features. Note For all the other changes and security fixes, see OpenJDK 11.0.13 Released. 3.1.1. Removed IdenTrust root certificate The following root certificate from IdenTrust has been removed from the cacerts keystore: Alias Name: identrustdstx3 [jdk] Distinguished Name: CN=DST Root CA X3, O=Digital Signature Trust Co. For more information, see JDK-8271434. 3.1.2. Updated keytool to create the AKID from the SKID of the issuing certificate as specified by RFC 5280 The gencert command of the keytool utility has been updated to create the AKID (Authority Key Identifier) extension from the SKID (Subject Key Identifier) of the issuing certificate, as specified by RFC 5280. For more information, see JDK-8261922. 3.1.3. Added ChaCha20 and Poly1305 TLS cipher suites New TLS cipher suites using the ChaCha20-Poly1305 algorithm have been added to JSSE. These cipher suites are enabled by default. The TLS_CHACHA20_POLY1305_SHA256 cipher suite is available for TLS 1.3. The following cipher suites are available for TLS 1.2: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 For more information, see JDK-8210799. 3.1.4. Updated the default enabled cipher suites preference The preference order of the default enabled cipher suites has changed. The compatibility impact should be minimal. If needed, applications can customize the enabled cipher suites and their preference order. For more information, see JDK-8219551. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.13/rn-openjdk11013-features |
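A hedged illustration of the keytool behavior described in 3.1.2, issuing a certificate whose AKID should now be derived from the issuer's SKID; the aliases, distinguished names, and PKCS12 keystore are invented for the example:

# Create a CA key pair and a server key pair (placeholder names)
keytool -genkeypair -alias ca -dname "CN=Example CA" -keyalg RSA -keysize 2048 \
    -keystore ks.p12 -storetype PKCS12 -storepass changeit
keytool -genkeypair -alias server -dname "CN=server.example.com" -keyalg RSA -keysize 2048 \
    -keystore ks.p12 -storetype PKCS12 -storepass changeit

# Issue the server certificate with the CA key; per the change above, the resulting
# certificate's AKID extension should match the CA certificate's SKID (RFC 5280)
keytool -certreq -alias server -keystore ks.p12 -storepass changeit -file server.csr
keytool -gencert -alias ca -infile server.csr -outfile server.crt -rfc \
    -keystore ks.p12 -storepass changeit

# Inspect the issued certificate's extensions
keytool -printcert -file server.crt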