Chapter 3. Deploy standalone Multicloud Object Gateway
Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. After deploying the component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway has only a single copy of its database (NooBaa DB). If the NooBaa DB PVC becomes corrupted and cannot be recovered, this can result in total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case); a sketch of this command, together with the infra node taint, appears at the end of this chapter. Taint a node as infra to ensure that only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . 
Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects.

| Component | Corresponding pods |
|---|---|
| OpenShift Data Foundation Operator | ocs-operator-* (1 pod on any storage node), ocs-metrics-exporter-* (1 pod on any storage node), odf-operator-controller-manager-* (1 pod on any storage node), odf-console-* (1 pod on any storage node), csi-addons-controller-manager-* (1 pod on any storage node) |
| Rook-ceph Operator | rook-ceph-operator-* (1 pod on any storage node) |
| Multicloud Object Gateway | noobaa-operator-* (1 pod on any storage node), noobaa-core-* (1 pod on any storage node), noobaa-db-pg-* (1 pod on any storage node), noobaa-endpoint-* (1 pod on any storage node) |
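The Important note in the prerequisites of Section 3.1 refers to annotating the openshift-storage namespace with a blank node selector and to tainting nodes as infra. The following is a minimal sketch of those commands; the oc annotate command is the one listed for this chapter, while the taint key node.ocs.openshift.io/storage and the node name are assumptions that you should adapt to your cluster.

```bash
# Specify a blank node selector for the openshift-storage namespace
# (create the namespace first if it does not already exist).
oc annotate namespace openshift-storage openshift.io/node-selector=

# Hypothetical example: taint a node as infra so that only OpenShift Data
# Foundation resources are scheduled on it (replace the node name).
oc adm taint nodes worker-1.example.com node.ocs.openshift.io/storage=true:NoSchedule
```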
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_amazon_web_services/deploy-standalone-multicloud-object-gateway
Chapter 1. Support policy for Eclipse Temurin
Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Eclipse Temurin.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.372_release_notes/openjdk8-temurin-support-policy
2.3. Exclusive Activation of a Volume Group in a Cluster
2.3. Exclusive Activation of a Volume Group in a Cluster The following procedure configures the volume group in a way that will ensure that only the cluster is capable of activating the volume group, and that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata. This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to automatically activate on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd . Perform the following procedure on each node in the cluster. Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also disables and stops any lvmetad processes immediately. Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example. Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows. Note that the volume group you have just defined for the cluster ( my_vg in this example) is not in this list. Note If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [] . Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete. Reboot the node. Note If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node. When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node then enter the following command. Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on each of the nodes with the following command.
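The commands that this procedure refers to are collected below as a single annotated sketch of one pass through the procedure on a node; the volume group names my_vg, rhel_root, and rhel_home are the examples used in this section, so substitute your own.

```bash
# Ensure locking_type=1 and use_lvmetad=0 in /etc/lvm/lvm.conf, and stop lvmetad.
lvmconf --enable-halvm --services --startstopservices

# List the volume groups currently configured on local storage.
vgs --noheadings -o vg_name

# In /etc/lvm/lvm.conf, allow only the local (non-cluster) volume groups to
# auto-activate; my_vg, the cluster-managed volume group, is deliberately omitted:
#   volume_list = [ "rhel_root", "rhel_home" ]

# Rebuild the initramfs so the boot image does not activate a cluster-controlled
# volume group, then reboot the node.
dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

# After the reboot, start cluster services on this node, or on all nodes at once.
pcs cluster start
pcs cluster start --all
```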
[ "lvmconf --enable-halvm --services --startstopservices", "vgs --noheadings -o vg_name my_vg rhel_home rhel_root", "volume_list = [ \"rhel_root\", \"rhel_home\" ]", "dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)", "pcs cluster start", "pcs cluster start --all" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-exclusiveactive-HAAA
Chapter 4. KafkaSpec schema reference
Chapter 4. KafkaSpec schema reference Used in: Kafka

| Property | Description | Type |
|---|---|---|
| kafka | Configuration of the Kafka cluster. | KafkaClusterSpec |
| zookeeper | Configuration of the ZooKeeper cluster. | ZookeeperClusterSpec |
| entityOperator | Configuration of the Entity Operator. | EntityOperatorSpec |
| clusterCa | Configuration of the cluster certificate authority. | CertificateAuthority |
| clientsCa | Configuration of the clients certificate authority. | CertificateAuthority |
| cruiseControl | Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. | CruiseControlSpec |
| jmxTrans | The jmxTrans property has been deprecated. JMXTrans is deprecated and the related resources were removed in AMQ Streams 2.5. As of AMQ Streams 2.5, JMXTrans is no longer supported and this option is ignored. | JmxTransSpec |
| kafkaExporter | Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example, lag of a consumer group at topic/partition level. | KafkaExporterSpec |
| maintenanceTimeWindows | A list of time windows for maintenance tasks (that is, certificate renewal). Each time window is defined by a cron expression. | string array |
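The following is a minimal sketch of how these properties are laid out in a Kafka custom resource; the apiVersion, cluster name, replica counts, listener, and storage settings are illustrative assumptions rather than values taken from this reference.

```yaml
apiVersion: kafka.strimzi.io/v1beta2     # assumed API version
kind: Kafka
metadata:
  name: my-cluster                       # hypothetical cluster name
spec:
  kafka:                                 # KafkaClusterSpec
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:                             # ZookeeperClusterSpec
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:                        # EntityOperatorSpec
    topicOperator: {}
    userOperator: {}
```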
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaSpec-reference
B.2. Identity Management Replicas
B.2. Identity Management Replicas This guide describes common replication problems for Identity Management in Red Hat Enterprise Linux. Additional resources: For advice on how to test that replication is working, see Section 4.6, "Testing the New Replica" . For advice on how to solve replication conflicts, see Section D.3.3, "Creating and Removing Replication Agreements" and for details, see Section 15.26, "Solving Common Replication Conflicts" in the Directory Server Administration Guide . The Directory Server repl-monitor script shows in-progress status of replication, which can help you troubleshoot replication problems. For more information, see Section 15.24, "Monitoring Replication Status" in the Directory Server Administration Guide . To verify if two Directory Server instances are synchronized, see Section 15.25, "Comparing Two Directory Server Instances" in the Directory Server Administration Guide . B.2.1. Authenticating AD Users Against a New Replica Fails After installing a new replica in an Identity Management - Active Directory trust setup, attempts to authenticate Active Directory (AD) users against the IdM replica fail. What this means: The replica is neither a trust controller nor trust agent. Because of this, it cannot serve information from the AD trust. To fix the problem: Configure the replica as a trust agent. See Trust Controllers and Trust Agents in the Windows Integration Guide . B.2.2. Replica Starts with SASL, GSS-API, and Kerberos Errors in the Directory Server Logs When the replica starts, a series of SASL bind errors are recorded in the Directory Server (DS) logs. The errors state the GSS-API connection failed because it could not find a credentials cache: Additionally, other messages can occur stating that the server could not obtain Kerberos credentials for the host principal: What this means: IdM uses GSS-API for Kerberos connections. The DS instance keeps the Kerberos credentials cache in memory. When the DS process ends, such as when the IdM replica stops, the credentials cache is destroyed. When the replica restarts, DS starts before the KDC server starts. Because of this start order, the Kerberos credentials are not yet saved in the credentials cache when DS starts, which is what causes the errors. After the initial failure, DS re-attempts to establish the GSS-API connection after the KDC starts. This second attempt is successful and ensures that the replica works as expected. You can ignore the described startup errors as long as the GSS-API connection is successfully established and the replica works as expected. The following message shows that the connection was successful: B.2.3. The DNS Forward Record Does Not Match the Reverse Address When configuring a new replica, installation fails with a series of certificate errors, followed by a DNS error stating the DNS forward record does not match the reverse address. What this means: Multiple host names are used for a single PTR record. The DNS standard allows such configuration, but it causes an IdM replica installation to fail. To fix the problem: Verify the DNS configuration, as described in the section called "Verifying the Forward and Reverse DNS Configuration" . B.2.4. Serial Numbers Not Found Errors Note This solution is applicable at domain level 0 . See Chapter 7, Displaying and Raising the Domain Level for details. 
An error stating that a certificate serial number was not found appears on a replicated server: What this means: A certificate replication agreement between two replicas has been removed but a data replication agreement is still in place. Both replicas are still issuing certificates, but information about the certificates is no longer replicated. Example situation: Replica A issues a certificate to a host. The certificate is not replicated to replica B, because the replicas have no certificate replication agreement established. A user attempts to use replica B to manage the host. Replica B returns an error that it cannot verify the host's certificate serial number. This is because replica B has information about the host in its data directory, but it does not have the host certificate in its certificate directory. To fix the problem: Enable certificate server replication between the two replicas using the ipa-csreplica-manage connect command. See Section D.3.3, "Creating and Removing Replication Agreements" . Re-initialize one of the replicas from the other to synchronize them. See Section D.3.5, "Re-initializing a Replica" . Warning Re-initializing overwrites data on the re-initialized replica with the data from the other replica. Some information might be lost. B.2.5. Cleaning Replica Update Vector (RUV) Errors Note This solution is applicable at domain level 0 . See Chapter 7, Displaying and Raising the Domain Level for details. After a replica has been removed from the IdM topology, obsolete RUV records are now present on one or more remaining replicas. Possible causes: The replica has been removed without properly removing its replication agreements first, as described in the section called "Removing Replication Agreements" . The replica has been removed when another replica was offline. What this means: The other replicas still expect to receive updates from the removed replica. Note The correct procedure for removing a replica is described in Section D.3.6, "Removing a Replica" . To fix the problem: Clean the RUV records on the replica that expects to receive the updates. List the details about the obsolete RUVs using the ipa-replica-manage list-ruv command. The command displays the replica IDs: Clear the corrupt RUVs using the ipa-replica-manage clean-ruv replica_ID command. The command removes any RUVs associated with the specified replica. Repeat the command for every replica with obsolete RUVs. For example: Warning Proceed with extreme caution when using ipa-replica-manage clean-ruv . Running the command against a valid replica ID will corrupt all the data associated with that replica in the replication database. If this happens, re-initialize the replica from another replica as described in Section D.3.5, "Re-initializing a Replica" . Run ipa-replica-manage list-ruv again. If the command no longer displays any corrupt RUVs, the records have been successfully cleaned. If the command still displays corrupt RUVs, clear them manually using this task: If you are not sure on which replica to clean the RUVs: Search all your servers for active replica IDs. Make a list of uncorrupted and reliable replica IDs. To find the IDs of valid replicas, run this LDAP query for all the nodes in your topology: Run ipa-replica-manage list-ruv on every server. Note any replica IDs that are not on the list of uncorrupted replica IDs. Run ipa-replica-manage clean-ruv replica_ID for every corrupted replica ID. B.2.6. Recovering a Lost CA Server Note This solution is applicable at domain level 0 . 
See Chapter 7, Displaying and Raising the Domain Level for details. You only had one server with CA installed. This server failed and is now lost. What this means: The CA configuration for your IdM domain is no longer available. To fix the problem: If you have a backup of the original CA server available, you can restore the server and install the CA on a replica. Recover the CA server from backup. See Section 9.2, "Restoring a Backup" for details. This makes the CA server available to the replica. Delete the replication agreements between the initial server and the replica to avoid replication conflicts. See Section D.3.3, "Creating and Removing Replication Agreements" . Install the CA on the replica. See Section 6.5.2, "Promoting a Replica to a Master CA Server" . Decommission the original CA server. See Section D.3.6, "Removing a Replica" . If you do not have a backup of the original CA server, the CA configuration was lost when the server failed and cannot be recovered.
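For the serial-number errors described in B.2.4, the following is a minimal command sketch of re-enabling certificate server replication and then re-initializing one replica; the host names are placeholders, and the authoritative procedure is in Section D.3.3 and Section D.3.5.

```bash
# Re-create the certificate server replication agreement between the two replicas.
ipa-csreplica-manage connect server1.example.com server2.example.com

# Re-initialize one replica from the other to synchronize them.
# Warning: this overwrites the data on the re-initialized replica.
ipa-replica-manage re-initialize --from server1.example.com
```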
[ "slapd_ldap_sasl_interactive_bind - Error: could not perform interactive bind for id [] mech [GSSAPI]: error -2 (Local error) (SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Credentials cache file '/tmp/krb5cc_496' not found))", "set_krb5_creds - Could not get initial credentials for principal [ldap/ replica1.example.com] in keytab [WRFILE:/etc/dirsrv/ds.keytab]: -1765328324 (Generic error)", "Replication bind with GSSAPI auth resumed", "ipa: DEBUG: approved_usage = SSLServer intended_usage = SSLServer ipa: DEBUG: cert valid True for \"CN=replica.example.com,O=EXAMPLE.COM\" ipa: DEBUG: handshake complete, peer = 192.0.2.2:9444 Certificate operation cannot be completed: Unable to communicate with CMS (Not Found) ipa: DEBUG: Created connection context.ldap2_21534032 ipa: DEBUG: Destroyed connection context.ldap2_21534032 The DNS forward record replica.example.com. does not match the reverse address replica.example.org", "Certificate operation cannot be completed: EXCEPTION (Certificate serial number 0x2d not found)", "ipa-replica-manage list-ruv server1.example.com:389: 6 server2.example.com:389: 5 server3.example.com:389: 4 server4.example.com:389: 12", "ipa-replica-manage clean-ruv 6 ipa-replica-manage clean-ruv 5 ipa-replica-manage clean-ruv 4 ipa-replica-manage clean-ruv 12", "dn: cn=clean replica_ID , cn=cleanallruv, cn=tasks, cn=config objectclass: extensibleObject replica-base-dn: dc= example ,dc= com replica-id: replica_ID replica-force-cleaning: no cn: clean replica_ID", "ldapsearch -p 389 -h IdM_node -D \"cn=directory manager\" -W -b \"cn=config\" \"(objectclass=nsds5replica)\" nsDS5ReplicaId" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-replica
Chapter 5. Setting BIOS parameters for system tuning
Chapter 5. Setting BIOS parameters for system tuning The BIOS plays a key role in the functioning of the system. By configuring the BIOS parameters correctly, you can significantly improve system performance. Note Every system and BIOS vendor uses different terms and navigation methods. For more information about BIOS settings, see the BIOS documentation or contact the BIOS vendor. 5.1. Disabling power management to improve response times BIOS power management options help save power by changing the system clock frequency or by putting the CPU into one of various sleep states. These actions are likely to affect how quickly the system responds to external events. To improve response times, disable all power management options in the BIOS. 5.2. Improving response times by disabling error detection and correction units Error Detection and Correction (EDAC) units are devices for detecting and correcting errors signaled from Error Correcting Code (ECC) memory. Usually, EDAC options range from no ECC checking to a periodic scan of all memory nodes for errors. The higher the EDAC level, the more time the BIOS uses. This may result in missing crucial event deadlines. To improve response times, turn off EDAC. If this is not possible, configure EDAC to the lowest functional level. 5.3. Improving response time by configuring System Management Interrupts System Management Interrupts (SMIs) are a hardware vendor's facility to ensure that the system is operating correctly. The BIOS code usually services the SMI interrupt. SMIs are typically used for thermal management, remote console management (IPMI), EDAC checks, and various other housekeeping tasks. If the BIOS contains SMI options, check with the vendor and any relevant documentation to determine the extent to which it is safe to disable them. Warning While it is possible to completely disable SMIs, Red Hat strongly recommends that you do not do this. Removing the ability of your system to generate and service SMIs can result in catastrophic hardware failure.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/setting-bios-parameters-for-system-tuning_optimizing-rhel9-for-real-time-for-low-latency-operation
Chapter 4. General Updates
Chapter 4. General Updates In-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 An in-place upgrade offers a way of upgrading a system to a new major release of Red Hat Enterprise Linux by replacing the existing operating system. To perform an in-place upgrade, use the Preupgrade Assistant , a utility that checks the system for upgrade issues before running the actual upgrade, and that also provides additional scripts for the Red Hat Upgrade Tool . When you have solved all the problems reported by the Preupgrade Assistant , use the Red Hat Upgrade Tool to upgrade the system. For details regarding procedures and supported scenarios, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Migration_Planning_Guide/chap-Red_Hat_Enterprise_Linux-Migration_Planning_Guide-Upgrading.html and https://access.redhat.com/solutions/637583 . Note that the Preupgrade Assistant and the Red Hat Upgrade Tool are available in the Red Hat Enterprise Linux 6 Extras channel; see https://access.redhat.com/support/policy/updates/extras . (BZ#1432080) The setup package now provides a way to override unpredictable environment settings The setup package now provides and sources the sh.local and csh.local files in the /etc/profile.d directory; these files are sourced last and are intended for overriding environment variables. Previously, an undefined order could result in unpredictable environment settings, especially when multiple scripts changed the same environment variable. (BZ# 1344007 )
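As a minimal sketch, an override placed in the sh.local file might look like the following; the variable and value are hypothetical and not part of the release note.

```bash
# /etc/profile.d/sh.local - sourced last, so values set here win over
# the other profile.d scripts (hypothetical override).
export HISTSIZE=10000
```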
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_general_updates
Builds using Shipwright
Builds using Shipwright OpenShift Container Platform 4.16 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_shipwright/index
1.10.3. REDUNDANCY
1.10.3. REDUNDANCY The REDUNDANCY panel allows you to configure the backup LVS router node and set various heartbeat monitoring options. Figure 1.33. The REDUNDANCY Panel Redundant server public IP The public real IP address for the backup LVS router. Redundant server private IP The backup router's private real IP address. The rest of the panel is for configuring the heartbeat channel, which is used by the backup node to monitor the primary node for failure. Heartbeat Interval (seconds) Sets the number of seconds between heartbeats, that is, the interval at which the backup node checks the functional status of the primary LVS node. Assume dead after (seconds) If the primary LVS node does not respond after this number of seconds, the backup LVS router node initiates failover. Heartbeat runs on port Sets the port on which the heartbeat communicates with the primary LVS node. The default is 539 if this field is left blank.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-piranha-redun-cso
Chapter 116. GitHub Component
Chapter 116. GitHub Component Available as of Camel version 2.15 The GitHub component interacts with the GitHub API by encapsulating egit-github . It currently provides polling for new pull requests, pull request comments, tags, and commits. It is also able to produce comments on pull requests, as well as close the pull request entirely. Rather than webhooks, this endpoint relies on simple polling. Reasons include: Concern for reliability/stability The types of payloads we're polling aren't typically large (plus, paging is available in the API) The need to support apps running somewhere not publicly accessible where a webhook would fail Note that the GitHub API is fairly expansive. Therefore, this component could be easily expanded to provide additional interactions. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-github</artifactId> <version>USD{camel-version}</version> </dependency> 116.1. URI format github://endpoint[?options] 116.2. Mandatory Options: Note that these can be configured directly through the endpoint. The GitHub component has no options. The GitHub endpoint is configured using URI syntax: with the following path and query parameters: 116.2.1. Path Parameters (2 parameters): Name Description Default Type type Required What git operation to execute GitHubType branchName Name of branch String 116.2.2. Query Parameters (12 parameters): Name Description Default Type oauthToken (common) GitHub OAuth token, required unless username & password are provided String password (common) GitHub password, required unless oauthToken is provided String repoName (common) Required GitHub repository name String repoOwner (common) Required GitHub repository owner (organization) String username (common) GitHub username, required unless oauthToken is provided String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern encoding (producer) To use the given encoding when getting a git commit file String state (producer) To set git commit status state String targetUrl (producer) To set git commit status target url String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 116.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.github.enabled Enable github component true Boolean camel.component.github.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 116.4. 
Consumer Endpoints: Endpoint Context Body Type pullRequest polling org.eclipse.egit.github.core.PullRequest pullRequestComment polling org.eclipse.egit.github.core.Comment (comment on the general pull request discussion) or org.eclipse.egit.github.core.CommitComment (inline comment on a pull request diff) tag polling org.eclipse.egit.github.core.RepositoryTag commit polling org.eclipse.egit.github.core.RepositoryCommit 116.5. Producer Endpoints: Endpoint Body Message Headers pullRequestComment String (comment text) - GitHubPullRequest (integer) (REQUIRED): Pull request number. - GitHubInResponseTo (integer): Required if responding to another inline comment on the pull request diff. If left off, a general comment on the pull request discussion is assumed. closePullRequest none - GitHubPullRequest (integer) (REQUIRED): Pull request number. createIssue (From Camel 2.18) String (issue body text) - GitHubIssueTitle (String) (REQUIRED): Issue Title.
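As a sketch of how the consumer and producer endpoints above can be wired together in a route, the following RouteBuilder polls for new pull requests and sends a comment on one; the repository owner, repository name, token placeholder, and pull request number are illustrative assumptions.

```java
import org.apache.camel.builder.RouteBuilder;

public class GitHubRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Consumer: poll for new pull requests on a repository.
        from("github://pullRequest?repoOwner=my-org&repoName=my-repo&oauthToken={{github.token}}")
            .log("New pull request received: ${body}");

        // Producer: comment on a pull request; the GitHubPullRequest header
        // carries the pull request number and the body is the comment text.
        from("direct:comment")
            .setHeader("GitHubPullRequest", constant(42))
            .setBody(constant("Thanks, taking a look."))
            .to("github://pullRequestComment?repoOwner=my-org&repoName=my-repo&oauthToken={{github.token}}");
    }
}
```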
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-github</artifactId> <version>USD{camel-version}</version> </dependency>", "github://endpoint[?options]", "github:type/branchName" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/github-component
Chapter 4. Changes in common components
Chapter 4. Changes in common components This section explains the changes in basic Eclipse Vert.x components. 4.1. Changes in messaging This section explains the changes in the messaging methods. 4.1.1. Write and end methods in write streams are no longer fluent The WriteStream<T>.write() and WriteStream<T>.end() methods are no longer fluent. Write and end callback methods return void . Other write and end methods return Future<Void> . This is a breaking change. Update your applications if you have used the fluent aspect for write streams. 4.1.2. MessageProducer does not extend WriteStream The MessageProducer interface does not extend the WriteStream interface. In the releases of Eclipse Vert.x, the MessageProducer interface extended the WriteStream interface. The MessageProducer interface provided limited support for message back-pressure. Credit leaks would result in a reduction of credits in the message producer. If these leaks used all the credits, messages would not be sent. However, MessageConsumer will continue to extend ReadStream . When MessageConsumer is paused and the pending message queue is full, the messages are dropped. This continues the integration with Rx generators to build message consuming pipelines. 4.1.3. Removed the send methods from MessageProducer The send methods in the MessageProducer interface have been removed. Use the methods MessageProducer<T>.write(T) instead of MessageProducer<T>.send(T) and EventBus.request(String,Object,Handler) instead of MessageProducer.send(T,Handler) . 4.2. Changes in EventBus The following section describes the changes in EventBus. 4.2.1. Removed the request-response send methods in EventBus The EventBus.send(... , Handler<AsyncResult<Message<T>>>) and Message.reply(... , Handler<AsyncResult<Message<T>>>) methods have been removed. These methods would have caused overloading issues in Eclipse Vert.x 4. The version of the method returning Future<Message<T>> would collide with the fire and forget version. The request-response messaging pattern should use the new request and replyAndRequest methods. Use the method EventBus.request(... , Handler<AsyncResult<Message<T>>>) instead of EventBus.send(... , Handler<AsyncResult<Message<T>>>) to send a message. Use the method Message.replyAndRequest(... , Handler<AsyncResult<Message<T>>>) instead of Message.reply(... , Handler<AsyncResult<Message<T>>>) to reply to the message. The following example shows the request and reply to a message in Eclipse Vert.x 3.x releases. Request Reply The following example shows the request and reply to a message in Eclipse Vert.x 4. Request Reply 4.3. Changes in future This section explains the changes in future. 4.3.1. Support for multiple handlers for futures From Eclipse Vert.x 4 onward, multiple handlers are supported for a future. The Future<T>.setHandler() method used to set a single handler and has been removed. Use Future<T>.onComplete() , Future<T>.onSuccess() , and Future<T>.onFailure() methods instead to call handlers on completion, success, and failure results of an action. The following example shows how to call a handler in Eclipse Vert.x 3.x releases. The following example shows how to call the new Future<T>.onComplete() method in Eclipse Vert.x 4. 4.3.2. Removed the completer() method in future In earlier releases of Eclipse Vert.x, you would use the Future.completer() method to access Handler<AsyncResult<T>> , which was associated with the Future . In Eclipse Vert.x 4, the Future<T>.completer() method has been removed. 
Future<T> directly extends Handler<AsyncResult<T>> . You can access all the handler methods using the Future object. The Future object is also a handler. 4.3.3. Removed the connection handler method in HTTP client request The HttpClientRequest.connectionHandler() method has been removed. Use HttpClient.connectionHandler() method instead to call connection handlers for client requests in your application. The following example shows how the HttpClientRequest.connectionHandler() method was used in Eclipse Vert.x 3.x releases. The following example shows you how to use the new HttpClient.connectionHandler() method in Eclipse Vert.x 4. 4.4. Changes in verticles This section explains the changes in the verticles. 4.4.1. Updates in the create verticle method In earlier releases of Eclipse Vert.x, VerticleFactory.createVerticle() method synchronously instantiated a verticle. From Eclipse Vert.x 4 onward, the method asynchronously instantiates the verticle and returns the callback Callable<Verticle> instead of the single verticle instance. This improvement enables the application to call this method once and invoke the returned callable multiple times for creating multiple instances. The following code shows how verticles were instantiated in Eclipse Vert.x 3.x releases. The following code shows how verticles are instantiated in Eclipse Vert.x 4. 4.4.2. Updates in the factory class and methods The VerticleFactory class has been simplified. The class does not require initial resolution of an identifier because the factory can instead use nested deployment to deploy the verticle. If your existing applications use factories, in Eclipse Vert.x 4 you can update the code to use a callable when a promise completes or fails. The callable can be called several times. The following example shows existing factories in an Eclipse Vert.x 3.x application. The following example shows how to update existing factories to use promise in Eclipse Vert.x 4. Use the Vertx.executeBlocking() method, if you want the factory to block code. When the factory receives the blocking code, it should resolve the promise and get the verticle instances from the promise. 4.4.3. Removed the multithreaded worker verticles Multi-threaded worker verticle deployment option has been removed. This feature could only be used with Eclipse Vert.x event-bus. Other Eclipse Vert.x components such as HTTP did not support the feature. Use the unordered Vertx.executeBlocking() method to achieve the same functionality as multi-threaded worker deployment. 4.5. Changes in threads This section explains the changes in threads. 4.5.1. Context affinity for non Eclipse Vert.x thread The Vertx.getOrCreateContext() method creates a single context for each non Eclipse Vert.x thread. The non Eclipse Vert.x threads are associated with a context the first time a context is created. In earlier releases, a new context was created each time the method was called from a non Eclipse Vert.x thread. new Thread(() -> { assertSame(vertx.getOrCreateContext(), vertx.getOrCreateContext()); }).start(); This change does not affect your applications, unless your application implicitly relies on a new context to be created with each invocation. In the following example the n blocks run concurrently as each blocking code is called on a different context. for (int i = 0;i < n;i++) { vertx.executeBlocking(block, handler); } To get the same results in Eclipse Vert.x 4, you must update the code: for (int i = 0;i < n;i++) { vertx.executeBlocking(block, false, handler); } 4.6. 
Changes in HTTP This section explains the changes in HTTP methods. 4.6.1. Generic updates in Eclipse Vert.x HTTP methods The following section describes the miscellaneous updates in Eclipse Vert.x HTTP methods. 4.6.1.1. Updates in HTTP Methods for WebSocket The changes in WebSocket are: The usage of the term WebSocket in method names was inconsistent. The method names had incorrect capitalization, for example, Websocket , instead of WebSocket . The methods that had inconsistent usage of WebSocket in the following classes have been removed. Use the new methods that have correct capitalization instead. The following methods in HttpServerOptions class have been removed. Removed methods New methods getMaxWebsocketFrameSize() getMaxWebSocketFrameSize() setMaxWebsocketFrameSize() setMaxWebSocketFrameSize() getMaxWebsocketMessageSize() getMaxWebSocketMessageSize() setMaxWebsocketMessageSize() setMaxWebSocketMessageSize() getPerFrameWebsocketCompressionSupported() getPerFrameWebSocketCompressionSupported() setPerFrameWebsocketCompressionSupported() setPerFrameWebSocketCompressionSupported() getPerMessageWebsocketCompressionSupported() getPerMessageWebSocketCompressionSupported() setPerMessageWebsocketCompressionSupported() setPerMessageWebSocketCompressionSupported() getWebsocketAllowServerNoContext() getWebSocketAllowServerNoContext() setWebsocketAllowServerNoContext() setWebSocketAllowServerNoContext() getWebsocketCompressionLevel() getWebSocketCompressionLevel() setWebsocketCompressionLevel() setWebSocketCompressionLevel() getWebsocketPreferredClientNoContext() getWebSocketPreferredClientNoContext() setWebsocketPreferredClientNoContext() setWebSocketPreferredClientNoContext() getWebsocketSubProtocols() getWebSocketSubProtocols() setWebsocketSubProtocols() setWebSocketSubProtocols() The new methods for WebSocket subprotocols use List<String> data type instead of a comma separated string to store items. The following methods in HttpClientOptions class have been removed. Removed Methods Replacing Methods getTryUsePerMessageWebsocketCompression() getTryUsePerMessageWebSocketCompression() setTryUsePerMessageWebsocketCompression() setTryUsePerMessageWebSocketCompression() getTryWebsocketDeflateFrameCompression() getTryWebSocketDeflateFrameCompression() getWebsocketCompressionAllowClientNoContext() getWebSocketCompressionAllowClientNoContext() setWebsocketCompressionAllowClientNoContext() setWebSocketCompressionAllowClientNoContext() getWebsocketCompressionLevel() getWebSocketCompressionLevel() setWebsocketCompressionLevel() setWebSocketCompressionLevel() getWebsocketCompressionRequestServerNoContext() getWebSocketCompressionRequestServerNoContext() setWebsocketCompressionRequestServerNoContext() setWebSocketCompressionRequestServerNoContext() The following handler methods in HttpServer class have been removed. Deprecated Methods New Methods websocketHandler() webSocketHandler() websocketStream() webSocketStream() WebsocketRejectedException is deprecated. The methods throw UpgradeRejectedException instead. The HttpClient webSocket() methods use Handler<AsyncResult<WebSocket>> instead of Handler or Handler<Throwable> . The number of overloaded methods to connect an HTTP client to a WebSocket has also been reduced by using the methods in WebSocketConnectOptions class. The HttpServerRequest.upgrade() method has been removed. This method was synchronous. Use the new method HttpServerRequest.toWebSocket() instead. This new method is asynchronous. 
The following example shows the use of the synchronous method in Eclipse Vert.x 3.x. The following example shows the use of the asynchronous method in Eclipse Vert.x 4. 4.6.1.2. Setting the number of WebSocket connections In Eclipse Vert.x 3.x, you could use the HTTP client pool size to define the maximum number of WebSocket connections in an application. The value accessor methods HttpClientOptions.maxPoolSize() were used to get and set the WebSocket connections. The default number of connections was set to 4 for each endpoint. The following example shows how WebSocket connections are set in Eclipse Vert.x 3.x. However, in Eclipse Vert.x 4, there is no pooling of WebSocket TCP connections, because the connections are closed after use. The applications use a different pool for HTTP requests. Use the value accessor methods HttpClientOptions.maxWebSockets() to get and set the WebSocket connections. The default number of connections is set to 50 for each endpoint. The following example shows how to set WebSocket connections in Eclipse Vert.x 4. 4.6.1.3. HttpMethod is available as an interface HttpMethod is available as a new interface. In earlier releases of Eclipse Vert.x, HttpMethod was declared as an enumerated data type. As an enumeration, it limited the extensibility of HTTP. Further, it prevented serving other HTTP methods with this type directly. You had to use the HttpMethod.OTHER value along with the rawMethod attribute during server and client HTTP requests. If you are using the HttpMethod enumerated data type in a switch block, you can use the following code to migrate your applications to Eclipse Vert.x 4. The following example shows a switch block in Eclipse Vert.x 3.x releases. switch (method) { case GET: ... break; case OTHER: String s = request.getRawMethod(); if (s.equals("PROPFIND")) { ... } else ... } The following example shows a switch block in Eclipse Vert.x 4. switch (method.name()) { case "GET": ... break; case "PROPFIND": ... break; } You can also use the following code in Eclipse Vert.x 4. If you are using the HttpMethod.OTHER value in your applications, use the following code to migrate the application to Eclipse Vert.x 4. The following example shows you the code in Eclipse Vert.x 3.x releases. The following example shows you the code in Eclipse Vert.x 4. 4.6.2. Changes in HTTP client This section describes the changes in the HTTP client. The following types of Eclipse Vert.x clients are available: Eclipse Vert.x web client Use the Eclipse Vert.x web client when your applications are web oriented, for example, REST, encoding and decoding HTTP payloads, interpreting the HTTP status response code, and so on. Eclipse Vert.x HTTP client Use the Eclipse Vert.x HTTP client when your applications are used as an HTTP proxy, for example, as an API gateway. The HTTP client has been updated and improved in Eclipse Vert.x 4. Note The Eclipse Vert.x web client is based on the Eclipse Vert.x HTTP client. 4.6.2.1. Migrating applications to Eclipse Vert.x web client The web client has been available since the Eclipse Vert.x 3.4.0 release. There is no change in the web client in Eclipse Vert.x 4. The client provides simplified HTTP interactions and some additional features, such as HTTP sessions, JSON encoding and decoding, and response predicates, which are not available in the Eclipse Vert.x HTTP client. The following example shows how to use the HTTP client in Eclipse Vert.x 3.x releases. 
HttpClientRequest request = client.get(80, "example.com", "/", response -> { int statusCode = response.statusCode(); response.exceptionHandler(err -> { // Handle connection error, for example, connection closed }); response.bodyHandler(body -> { // Handle body entirely }); }); request.exceptionHandler(err -> { // Handle connection error OR response error }); request.end(); The following example shows how to migrate an application to the web client in Eclipse Vert.x 3.x and Eclipse Vert.x 4 releases. client.get(80, "example.com", "/some-uri") .send(ar -> { if (ar.succeeded()) { HttpResponse<Buffer> response = ar.result(); // Handle response } else { // Handle error } }); 4.6.2.2. Migrating applications to Eclipse Vert.x HTTP client The HTTP client has fine-grained control over HTTP interactions and focuses on the HTTP protocol. The HTTP client has been updated and improved in Eclipse Vert.x 4: Simplified APIs with fewer interactions Robust error handling Support for connection reset for HTTP/1 The updates in the HTTP client APIs are: The HttpClient methods such as get() , delete() , and put() have been removed. Use the request(HttpMethod method, ...) method instead. An HttpClientRequest instance is created when a request or response is possible. For example, an HttpClientRequest instance is created when the client connects to the server or a connection is reused from the pool. 4.6.2.2.1. Sending a simple request The following example shows how to send a GET request in Eclipse Vert.x 3.x releases. HttpClientRequest request = client.get(80, "example.com", "/", response -> { int statusCode = response.statusCode(); response.exceptionHandler(err -> { // Handle connection error, for example, connection closed }); response.bodyHandler(body -> { // Handle body entirely }); }); request.exceptionHandler(err -> { // Handle connection error OR response error }); request.end(); The following example shows how to send a GET request in Eclipse Vert.x 4. client.request(HttpMethod.GET, 80, "example.com", "/", ar -> { if (ar.succeeded()) { HttpClientRequest request = ar.result(); request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); int statusCode = response.statusCode(); response.body(ar3 -> { if (ar3.succeeded()) { Buffer body = ar3.result(); // Handle body entirely } else { // Handle server error, for example, connection closed } }); } else { // Handle server error, for example, connection closed } }); } else { // Connection error, for example, invalid server or invalid SSL certificate } }); You can see that error handling is better in the new HTTP client. The following example shows how to use future composition in a GET operation in Eclipse Vert.x 4. Future<Buffer> fut = client.request(HttpMethod.GET, 80, "example.com", "/") .compose(request -> request.send().compose(response -> { int statusCode = response.statusCode(); if (statusCode == 200) { return response.body(); } else { return Future.failedFuture("Unexpected status code"); } })); fut.onComplete(ar -> { if (ar.succeeded()) { Buffer body = ar.result(); // Handle body entirely } else { // Handle error } }); Future composition improves exception handling. The example checks whether the status code is 200; otherwise it returns an error. Warning When you use the HTTP client with futures, the HttpClientResponse starts emitting buffers as soon as it receives a response. 
To avoid this, ensure that the future composition occurs either on the event loop (as shown in the example) or that you pause and resume the response. 4.6.2.2.2. Sending requests In Eclipse Vert.x 3.x releases, you could use the end() method to send requests. You could also send a body in the request. Since HttpClientRequest is a WriteStream<Buffer> , you could also use a pipe to stream the request. writeStream.pipeTo(request, ar -> { if (ar.succeeded()) { // Sent the stream } }); In Eclipse Vert.x 4, you can perform all the operations shown in the examples using the get() method. You can also use the new send() method to perform these operations. You can pass a buffer, a string, or a ReadStream as input to the send() method. The method returns an HttpClientResponse instance. // Send a request and process the response request.onComplete(ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); // Handle the response } }); request.end(); // The new send method combines all the operations request.send(ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); // Handle the response } }); 4.6.2.2.3. Handling responses The HttpClientResponse interface has been updated and improved with the following methods: body() method The body() method returns an asynchronous buffer. Use the body() method instead of bodyHandler() . The following example shows how to use the bodyHandler() method to get the request body. response.bodyHandler(body -> { // Process the request body }); response.exceptionHandler(err -> { // Could not get the request body }); The following example shows how to use the body() method to get the request body. response.body(ar -> { if (ar.succeeded()) { // Process the request body } else { // Could not get the request body } }); end() method The end() method returns a future when a response is fully received successfully or failed. The method removes the response body. Use this method instead of the endHandler() method. The following example shows how to use the endHandler() method. response.endHandler(v -> { // Response ended }); response.exceptionHandler(err -> { // Response failed, something went wrong }); The following example shows how to use the end() method. response.end(ar -> { if (ar.succeeded()) { // Response ended } else { // Response failed, something went wrong } }); You can also handle the response with methods such as onSuccess() , compose() , bodyHandler() , and so on. The following examples demonstrate handling responses using the onSuccess() method. The following example shows how to use the HTTP client with the result() method in Eclipse Vert.x 3.x releases. HttpClient client = vertx.createHttpClient(options); client.request(HttpMethod.GET, 8443, "localhost", "/") .onSuccess(request -> { request.onSuccess(resp -> { //Code to handle HTTP response }); }); The following example shows how to use the HTTP client with the result() method in Eclipse Vert.x 4. HttpClient client = vertx.createHttpClient(options); client.request(HttpMethod.GET, 8443, "localhost", "/") .onSuccess(request -> { request.response().onSuccess(resp -> { //Code to handle HTTP response }); }); 4.6.2.3. Improvements in the Eclipse Vert.x HTTP client This section describes the improvements in the HTTP client. 4.6.2.3.1. HTTP client request and response methods take an asynchronous handler as input argument The HttpClient and HttpClientRequest methods have been updated to use asynchronous handlers. 
The methods take Handler<AsyncResult<HttpClientResponse>> as input instead of Handler<HttpClientResponse> . In earlier releases of Eclipse Vert.x, the HttpClient methods getNow() , optionsNow() and headNow() used to return HttpClientRequest , that you had to further send to perform a request. The getNow() , optionsNow() and headNow() methods have been removed. In Eclipse Vert.x 4, you can directly send a request with the required information using Handler<AsyncResult<HttpClientResponse>> . The following examples show how to send a request in Eclipse Vert.x 3.x. To perform a GET operation: Future<HttpClientResponse> f1 = client.get(8080, "localhost", "/uri", HttpHeaders.set("foo", "bar")); To POST with a buffer body: Future<HttpClientResponse> f2 = client.post(8080, "localhost", "/uri", HttpHeaders.set("foo", "bar"), Buffer.buffer("some-data")); To POST with a streaming body: Future<HttpClientResponse> f3 = client.post(8080, "localhost", "/uri", HttpHeaders.set("foo", "bar"), asyncFile); In Eclipse Vert.x 4, you can use the requests methods to create an HttpClientRequest instance. These methods can be used in basic interactions such as: Sending the request headers HTTP/2 specific operations such as setting a push handler, setting stream priority, pings, and so on. Creating a NetSocket tunnel Providing fine grained write control Resetting a stream Handling 100 continue headers manually The following example shows you how to create an HTTPClientRequest in Eclipse Vert.x 4. client.request(HttpMethod.GET, 8080, "example.com", "/resource", ar -> { if (ar.succeeded()) { HttpClientRequest request = ar.result(); request.putHeader("content-type", "application/json") request.send(new JsonObject().put("hello", "world")) .onSuccess(response -> { // }).onFailure(err -> { // }); } }) 4.6.2.3.2. Removed the connection handler method from HTTP client request The HttpClientRequest.connectionHandler() method has been removed. Use HttpClient.connectionHandler() method instead to call connection handlers for client requests in your application. The following example shows how the HttpClientRequest.connectionHandler() method was used in Eclipse Vert.x 3.x releases. client.request().connectionHandler(conn -> { // Connection related code }).end(); The following example shows you how to use the new HttpClient.connectionHandler() method. client.connectionHandler(conn -> { // Connection related code }); 4.6.2.3.3. HTTP client tunneling using the net socket method HTTP tunnels can be created using the HttpClientResponse.netSocket() method. In Eclipse Vert.x 4 this method has been updated. To get a net socket for the connection of the request, send a socket handler in the request. The handler is called when the HTTP response header is received. The socket is ready for tunneling and can send and receive buffers. The following example shows how to get net socket for a connection in Eclipse Vert.x 3.x releases. client.request(HttpMethod.CONNECT, uri, ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); if (response.statusCode() == 200) { NetSocket so = response.netSocket(); } } }).end(); The following example shows how to get net socket for a connection in Eclipse Vert.x 4. client.request(HttpMethod.CONNECT, uri, ar -> { }).netSocket(ar -> { if (ar.succeeded()) { // Got a response with a 200 status code NetSocket so = ar.result(); // Go for tunneling } }).end(); 4.6.2.3.4. New send() method in HttpClient class A new send() method is available in the HttpClient class. 
The following code shows how to send a request in Eclipse Vert.x 4. 4.6.2.3.5. HttpHeaders is an interface and contains MultiMap methods In Eclipse Vert.x 4, HttpHeaders is an interface. In earlier releases of Eclipse Vert.x, HttpHeaders was a class. The following new MultiMap methods have been added in the HttpHeaders interface. Use these methods to create MultiMap instances. MultiMap.headers() MultiMap.set(CharSequence name, CharSequence value) MultiMap.set(String name, String value) The following example shows how MultiMap instances were created in Eclipse Vert.x 3.x releases. The following examples show how to create MultiMap instances in Eclipse Vert.x 4. 4.6.2.3.6. CaseInsensitiveHeaders class is no longer public The CaseInsensitiveHeaders class is no longer public. Use the MultiMap.caseInsensitiveMultiMap() method to create a multi-map implementation with case insensitive keys. The following example shows how CaseInsensitiveHeaders method was used in Eclipse Vert.x 3.x releases. The following examples show how MultiMap method is used in Eclipse Vert.x 4. OR 4.6.2.3.7. Checking the version of HTTP running on the server In earlier releases of Eclipse Vert.x, the version of HTTP running on a server was checked only if the application explicitly called the HttpServerRequest.version() method. If the HTTP version was HTTP/1.x, the method would return the 501 HTTP status, and close the connection. From Eclipse Vert.x 4 onward, before a request is sent to the server, the HTTP version on the server is automatically checked by calling the HttpServerRequest.version() method. The method returns the HTTP version instead of throwing an exception when an invalid HTTP version is found. 4.6.2.3.8. New methods in request options In Eclipse Vert.x 4, the following new methods are available in the RequestOptions class: Header FollowRedirects Timeout The following example shows how to use the new methods. client.request(HttpMethod.GET, 8080, "example.com", "/resource", ar -> { if (ar.succeeded()) { HttpClientRequest request = ar.result(); request.putHeader("content-type", "application/json") request.send(new JsonObject().put("hello", "world")) .onSuccess(response -> { // }).onFailure(err -> { // }); } }) 4.7. Changes in connection methods This section explains the changes in connection methods. 4.7.1. Checking if authentication is required for client The NetServerOptions.isClientAuthRequired() method has been removed. Use the getClientAuth() == ClientAuth.REQUIRED enumerated type to check if client authentication is required. The following example shows how to use a switch statement to check if authentication of the client is required. The following example shows how to use the check if authentication of the client is required in Eclipse Vert.x 4. 4.7.2. Upgrade SSL method uses asynchronous handler The NetSocket.upgradeToSsl() method has been updated to use Handler<AsyncResult> instead of Handler . The handler is used to check if the channel has been successfully upgraded to SSL or TLS. 4.8. Changes in logging This section explains the changes in logging. 4.8.1. Deprecated logging classes and methods The logging classes Logger and LoggerFactory along with their methods have been deprecated. These logging classes and methods will be removed in a future release. 4.8.2. Removed Log4j 1 logger The Log4j 1 logger is no longer available. However, if you want to use Log4j 1 logger, it is available with SLF4J . 4.9. 
Changes in Eclipse Vert.x Reactive Extensions (Rx) This section describes the changes in Reactive Extensions (Rx) in Eclipse Vert.x. Eclipse Vert.x uses the RxJava library. 4.9.1. Support for RxJava 3 From Eclipse Vert.x 4.1.0, RxJava 3 is supported. A new rxified API is available in the io.vertx.rxjava3 package. Integration with Eclipse Vert.x JUnit5 is provided by the vertx-junit5-rx-java3 binding. To upgrade to RxJava 3, you must make the following changes: In the pom.xml file, under <dependency> change the RxJava 1 and 2 bindings from vertx-rx-java or vertx-rx-java2 to vertx-rx-java3 . In your application, update the imports from io.vertx.reactivex.* to io.vertx.rxjava3.* . In your application, update the imports for RxJava 3 types also. For more information, see What's new section in RxJava 3 documentation. 4.9.2. Removed onComplete callback from write stream The WriteStreamSubscriber.onComplete() callback has been removed. This callback was invoked if WriteStream had pending streams of data to be written. In Eclipse Vert.x 4, use the callbacks WriteStreamSubscriber.onWriteStreamEnd() and WriteStreamSubscriber.onWriteStreamError() instead. These callbacks are called after WriteStream.end() is complete. WriteStreamSubscriber<Buffer> subscriber = writeStream.toSubscriber(); The following example shows how to create the adapter from a WriteStream in Eclipse Vert.x 3.x releases. subscriber.onComplete(() -> { // Called after writeStream.end() is invoked, even if operation has not completed }); The following examples show how to create the adapter from a WriteStream using the new callback methods in Eclipse Vert.x 4 release: subscriber.onWriteStreamEnd(() -> { // Called after writeStream.end() is invoked and completes successfully }); subscriber.onWriteStreamError(() -> { // Called after writeStream.end() is invoked and fails }); 4.10. Changes in Eclipse Vert.x configuration The following section describes the changes in Eclipse Vert.x configuration. 4.10.1. New method to retrieve configuration The method ConfigRetriever.getConfigAsFuture() has been removed. Use the method retriever.getConfig() instead. The following example shows how configuration was retrieved in Eclipse Vert.x 3.x releases. Future<JsonObject> fut = ConfigRetriever. getConfigAsFuture(retriever); The following example shows how to retrieve configuration in Eclipse Vert.x 4. fut = retriever.getConfig(); 4.11. Changes in JSON This section describes changes in JSON. 4.11.1. Encapsulation of Jackson All the methods in the JSON class that implement Jackson types have been removed. 
Use the following methods instead: Removed Fields/Methods New methods Json.mapper() field DatabindCodec.mapper() Json.prettyMapper() field DatabindCodec.prettyMapper() Json.decodeValue(Buffer, TypeReference<T>) JacksonCodec.decodeValue(Buffer, TypeReference) Json.decodeValue(String, TypeReference<T>) JacksonCodec.decodeValue(String, TypeReference) For example, use the following code: When using Jackson TypeReference : In Eclipse Vert.x 3.x releases: List<Foo> foo1 = Json.decodeValue(json, new TypeReference<List<Foo>>() {}); In Eclipse Vert.x 4 release: List<Foo> foo2 = io.vertx.core.json.jackson.JacksonCodec.decodeValue(json, new TypeReference<List<Foo>>() {}); Referencing an ObjectMapper : In Eclipse Vert.x 3.x releases: ObjectMapper mapper = Json.mapper; In Eclipse Vert.x 4 release: mapper = io.vertx.core.json.jackson.DatabindCodec.mapper(); Setting an ObjectMapper : In Eclipse Vert.x 3.x releases: Json.mapper = someMapper; From Eclipse Vert.x 4 onward, you cannot write a mapper instance. You should use your own static mapper or configure the Databind.mapper() instance. 4.11.2. Object mapping In earlier releases, the Jackson core and Jackson databind dependencies were required at runtime. From Eclipse Vert.x 4 onward, only the Jackson core dependency is required. You will require the Jackson databind dependency only if you are object mapping JSON. In this case, you must explicitly add the dependency in your project descriptor in the com.fasterxml.jackson.core:jackson-databind jar. The following methods are supported for the mentioned types. Methods JsonObject.mapFrom(Object) JsonObject.mapTo(Class) Json.decodeValue(Buffer, Class) Json.decodeValue(String, Class) Json.encode(Object) Json.encodePrettily(Object) Json.encodeToBuffer(Object) Type JsonObject and JsonArray Map and List Number Boolean Enum byte[] and Buffer Instant The following methods are supported only with Jackson bind: JsonObject.mapTo(Object) JsonObject.mapFrom(Object) 4.11.3. Base64 encoder updated to Base64URL for JSON objects and arrays The Eclipse Vert.x JSON types implement RFC-7493. In earlier releases of Eclipse Vert.x, the implementation incorrectly used Base64 encoder instead of Base64URL. This has been fixed in Eclipse Vert.x 4, and Base64URL encoder is used in the JSON types. If you want to continue using the Base64 encoder in Eclipse Vert.x 4, you can use the configuration flag legacy . The following example shows how to set the configuration flag in Eclipse Vert.x 4. java -Dvertx.json.base64=legacy ... During your migration from Eclipse Vert.x 3.x to Eclipse Vert.x 4 if you have partially migrated your applications, then you will have applications on both version 3 and 4. In such cases where you have two versions of Eclipse Vert.x you can use the following utility to convert the Base64 string to Base64URL. public String toBase64(String base64Url) { return base64Url .replace('+', '-') .replace('/', '_'); } public String toBase64Url(String base64) { return base64 .replace('-', '+') .replace('_', '/'); } You must use the utility methods in the following scenarios: Handling integration while migrating from Eclipse Vert.x 3.x releases to Eclipse Vert.x 4. Handling interoperability with other systems that use Base64 strings. Use the following example code to convert a Base64URL to Base64 encoder. String base64url = someJsonObject.getString("base64encodedElement") String base64 = toBase64(base64url); The helper functions toBase64 and toBase64Url enable only JSON migrations. 
If you use object mapping to automatically map JSON objects to a Java POJO in your applications, then you must create a custom object mapper to convert the Base64 string to Base64URL. The following example shows you how to create a object mapper with custom Base64 decoder. // simple deserializer from Base64 to byte[] class ByteArrayDeserializer extends JsonDeserializer<byte[]> { ByteArrayDeserializer() { } public byte[] deserialize(JsonParser p, DeserializationContext ctxt) { String text = p.getText(); return Base64.getDecoder() .decode(text); } } // ... ObjectMapper mapper = new ObjectMapper(); // create a custom module to address the Base64 decoding SimpleModule module = new SimpleModule(); module.addDeserializer(byte[].class, new ByteArrayDeserializer()); mapper.registerModule(module); // JSON to POJO with custom deserializer mapper.readValue(json, MyClass.class); 4.11.4. Removed the JSON converter method from trust options The TrustOptions.toJSON method has been removed. 4.12. Changes in Eclipse Vert.x web The following section describes the changes in Eclipse Vert.x web. 4.12.1. Combined the functionality of user session handler in session handler In earlier releases of Eclipse Vert.x, you had to specify both the UserSessionHandler and SessionHandler handlers when working in a session. To simplify the process, in Eclipse Vert.x 4, the UserSessionHandler class has been removed and its functionality has been added in the SessionHandler class. In Eclipse Vert.x 4, to work with sessions you must specify only one handler. 4.12.2. Removed the cookie interfaces The following cookie interfaces have been removed: io.vertx.ext.web.Cookie io.vertx.ext.web.handler.CookieHandler Use the io.vertx.core.http.Cookie interface instead. 4.12.3. Favicon and error handlers use Vertx file system The create methods in FaviconHandler and ErrorHandler have been updated. You must pass a Vertx instance object in the create methods. These methods access file system. Passing the Vertx object ensures consistent access to files using the 'Vertx' file system. The following example shows how create methods were used in Eclipse Vert.x 3.x releases. FaviconHandler.create(); ErrorHandler.create(); The following example shows how create methods should be used in Eclipse Vert.x 4. FaviconHandler.create(vertx); ErrorHandler.create(vertx); 4.12.4. Accessing the template engine Use the method TemplateEngine.unwrap() to access the template engine. You can then apply customizations and configurations to the template. The following methods that are used to get and set the engine configurations have been deprecated. Use the TemplateEngine.unwrap() method instead. HandlebarsTemplateEngine.getHandlebars() HandlebarsTemplateEngine.getResolvers() HandlebarsTemplateEngine.setResolvers() JadeTemplateEngine.getJadeConfiguration() ThymeleafTemplateEngine.getThymeleafTemplateEngine() ThymeleafTemplateEngine.setMode() 4.12.5. Removed the locale interface The io.vertx.ext.web.Locale interface has been removed. Use the io.vertx.ext.web.LanguageHeader interface instead. 4.12.6. Removed the acceptable locales method The RoutingContext.acceptableLocales() method has been removed. Use the RoutingContext.acceptableLanguages() method instead. 4.12.7. Updated the method for mounting sub routers In earlier releases of Eclipse Vert.x, the Router.mountSubRouter() method incorrectly returned a Router . This has been fixed, and the method now returns a Route . 4.12.8. 
Removed the create method with excluded strings for JWT authentication handling The JWTAuthHandler.create(JWTAuth authProvider, String skip) method has been removed. Use the JWTAuthHandler.create(JWTAuth authProvider) method instead. The following example shows how JWT authentication handler was created in Eclipse Vert.x 3.x releases. router // protect everything but "/excluded/path" .route().handler(JWTAuthHandler(jwtAuth, "/excluded/path") The following example shows how JWT authentication handler was created in Eclipse Vert.x 4. router .route("/excluded/path").handler(/* public access to "/excluded/path" */) // protect everything .route().handler(JWTAuthHandler(jwtAuth) 4.12.9. Removed the create handler method that was used in OSGi environments In Eclipse Vert.x 4, OSGi environment is no longer supported. The StaticHandler.create(String, ClassLoader) method has been removed because the method was used in the OSGi environment. If you have used this method in your applications, then in Eclipse Vert.x 4 you can either add the resources to the application classpath or serve resources from the file system. 4.12.10. Removed the bridge options class The sockjs.BridgeOptions class has been removed. Use the new sockjs.SockJSBridgeOptions class instead. The sockjs.SockJSBridgeOptions class contains all the options that are required to configure the event bus bridge. There is no change in the behavior of the new class, except that the name of the data object class has changed. In releases, when you used sockjs.BridgeOptions class to add new bridges, there were a lot of duplicate configurations. The new class contains all the possible common configurations, and removes duplicate configurations. 4.12.11. SockJS socket event bus does not register a clustered event by default SockJSSocket no longer registers a clustered event bus consumer by default. If you want to write to the socket using the event bus, you must enable the writeHandler in SockJSHandlerOptions . When you enable the writeHandler , the event bus consumer is set to local by default. Router router = Router.router(vertx); SockJSHandlerOptions options = new SockJSHandlerOptions() .setRegisterWriteHandler(true); // enable the event bus consumer registration SockJSHandler sockJSHandler = SockJSHandler.create(vertx, options); router.mountSubRouter("/myapp", sockJSHandler.socketHandler(sockJSSocket -> { // Retrieve the writeHandlerID and store it (For example, in a local map) String writeHandlerID = sockJSSocket.writeHandlerID(); })); You can configure the event bus consumer to a cluster. SockJSHandlerOptions options = new SockJSHandlerOptions() .setRegisterWriteHandler(true) // enable the event bus consumer registration .setLocalWriteHandler(false) // register a clustered event bus consumer 4.12.12. New method for adding authentication provider The SessionHandler.setAuthProvider(AuthProvider) method has been deprecated. Use the SessionHandler.addAuthProvider() method instead. The new method allows an application to work with multiple authentication providers and link the session objects to these authentication providers. 4.12.13. OAuth2 authentication provider create methods require vertx as constructor argument From Eclipse Vert.x 4, OAuth2Auth.create(Vertx vertx) method requires vertx as a constructor argument. The vertx argument uses a secure non-blocking random number generator to generate nonce which ensures better security for applications. 4.13. 
Changes in Eclipse Vert.x Web GraphQL The following section describes the changes in Eclipse Vert.x Web GraphQL. Important Eclipse Vert.x Web GraphQL is provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. 4.13.1. Updated methods to be supported on multiple language (polyglot) environments The following methods have been updated and are now supported in polyglot environments: UploadScalar is now a factory, use the method UploadScalar.create() instead. VertxBatchLoader is now a factory, use the method io.vertx.ext.web.handler.graphql.dataloader.VertxBatchLoader.create() instead. VertxDataFetcher is now a factory, use the method io.vertx.ext.web.handler.graphql.schema.VertxDataFetcher.create() instead. VertxPropertyDataFetcher is now a factory, use the method io.vertx.ext.web.handler.graphql.schema.VertxPropertyDataFetcher.create() instead. 4.13.2. Handling POST requests in Eclipse Vert.x Web GraphQL In prior releases, the Eclipse Vert.x Web GraphQL handler could process its own POST requests. It did not need Eclipse Vert.x Web BodyHandler to process the requests. However, this implementation was susceptible to DDoS attacks. From Eclipse Vert.x 4 onward, BodyHandler is required to process POST requests. You must install BodyHandler before installing the Eclipse Vert.x Web GraphQL handler. 4.14. Changes in Micrometer metrics The following section describes the changes in Micrometer metrics. 4.14.1. TCP sent and received bytes are recorded as counters with equivalent HTTP request and response summaries In prior releases, the following metrics were recorded as distribution summaries for sockets. From Eclipse Vert.x 4 onward, these metrics are recorded as counters, which report the amount of data exchanged. Net client vertx_net_client_bytes_read vertx_net_client_bytes_written Net server vertx_net_server_bytes_read vertx_net_server_bytes_written For these counters, equivalent distribution summaries have been introduced for HTTP. These summaries are used to collect information about the request and response sizes. HTTP client vertx_http_client_request_bytes vertx_http_client_response_bytes HTTP server vertx_http_server_request_bytes vertx_http_server_response_bytes 4.14.2. Renamed the metrics The following metrics have been renamed. 
Old metrics name New metrics name Updated in components *_connections *_active_connections Net client and server HTTP client and server *_bytesReceived *_bytes_read Datagram Net client and server HTTP client and server *_bytesSent *_bytes_written Datagram Net client and server HTTP client and server *_requests *_active_requests HTTP client HTTP server *_requestCount_total *_requests_total HTTP client HTTP server *_responseTime_seconds *_response_time_seconds HTTP client HTTP server *_responseCount_total *_responses_total HTTP client HTTP server *_wsConnections *_active_ws_connections HTTP client HTTP server vertx_http_client_queue_delay_seconds vertx_http_client_queue_time_seconds vertx_http_client_queue_size vertx_http_client_queue_pending vertx_http_server_requestResetCount_total vertx_http_server_request_resets_total vertx_eventbus_bytesWritten vertx_eventbus_bytes_written vertx_eventbus_bytesRead vertx_eventbus_bytes_read vertx_eventbus_replyFailures vertx_eventbus_reply_failures vertx_pool_queue_delay_seconds vertx_pool_queue_time_seconds vertx_pool_queue_size vertx_pool_queue_pending vertx_pool_inUse vertx_pool_in_use 4.15. Changes in Eclipse Vert.x OpenAPI In Eclipse Vert.x 4, a new module vertx-web-openapi is available. Use this module along with vertx-web to develop contract-driven applications. The new module works well with Eclipse Vert.x Web Router . The new module requires the following Eclipse Vert.x dependencies: vertx-json-schema vertx-web-validation The new module is available in the package io.vertx.ext.web.openapi . In Eclipse Vert.x 4, the older OpenAPI module vertx-web-api-contract is supported to facilitate the migration to the new module. It is recommended that you move to the new module vertx-web-openapi to take advantage of the new functionality. 4.15.1. New module uses router builder The vertx-web-openapi module uses RouterBuilder to build the Eclipse Vert.x Web router. This router builder is similar to the router builder OpenAPI3RouterFactory in the vertx-web-api-contract module. To start working with the vertx-web-openapi module, instantiate the RouterBuilder . RouterBuilder.create(vertx, "petstore.yaml").onComplete(ar -> { if (ar.succeeded()) { // Spec loaded with success RouterBuilder routerBuilder = ar.result(); } else { // Something went wrong during router builder initialization Throwable exception = ar.cause(); } }); You can also instantiate the RouterBuilder using futures. RouterBuilder.create(vertx, "petstore.yaml") .onSuccess(routerBuilder -> { // Spec loaded with success }) .onFailure(exception -> { // Something went wrong during router builder initialization }); Note The vertx-web-openapi module uses the Eclipse Vert.x file system APIs to load the files. Therefore, you do not have to specify / for the classpath resources. For example, you can specify petstore.yaml in your application. The RouterBuilder can identify the contract from your classpath resources. 4.15.2. New router builder methods In most cases, you can search and replace usages of old OpenAPI3RouterFactory methods with the new RouterBuilder methods. The following table lists a few examples of old and new methods. 
Old OpenAPI3RouterFactory methods New RouterBuilder methods routerFactory.addHandlerByOperationId("getPets", handler) routerBuilder.operation("getPets").handler(handler) routerFactory.addFailureHandlerByOperationId("getPets", handler) routerBuilder.operation("getPets").failureHandler(handler) routerFactory.mountOperationToEventBus("getPets", "getpets.myapplication") routerBuilder.operation("getPets").routeToEventBus("getpets.myapplication") routerFactory.addGlobalHandler(handler) routerBuilder.rootHandler(handler) routerFactory.addBodyHandler(handler) routerBuilder.bodyHandler(handler) routerFactory.getRouter() routerBuilder.createRouter() Use the following syntax to access the parsed request parameters: RequestParameters parameters = routingContext.get(io.vertx.ext.web.validation.ValidationHandler.REQUEST_CONTEXT_KEY); int aParam = parameters.queryParameter("aParam").getInteger(); 4.15.3. Handling security In Eclipse Vert.x 4, the methods RouterFactory.addSecurityHandler() and OpenAPI3RouterFactory.addSecuritySchemaScopeValidator() are no longer available. Use the RouterBuilder.securityHandler() method instead. This method accepts io.vertx.ext.web.handler.AuthenticationHandler as a handler. The method automatically recognizes OAuth2Handler and sets up the security schema. The new security handlers also implement the operations defined in the OpenAPI specification . 4.15.4. Handling common failures In the vertx-web-openapi module, the following failure handlers are not available. You must set up failure handlers using the Router.errorHandler(int, Handler) method. Old methods in vertx-web-api-contract module New methods in vertx-web-openapi module routerFactory.setValidationFailureHandler(handler) router.errorHandler(400, handler) routerFactory.setNotImplementedFailureHandler(handler) router.errorHandler(501, handler) 4.15.5. Accessing the OpenAPI contract model In Eclipse Vert.x 4, the OpenAPI contract is not mapped to a plain old Java object (POJO). So, the additional swagger-parser dependency is no longer required. You can use the getters and resolvers to retrieve specific components of the contract. The following example shows how to retrieve a specific component using a single operation. JsonObject model = routerBuilder.operation("getPets").getOperationModel(); The following example shows how to retrieve the full contract. JsonObject contract = routerBuilder.getOpenAPI().getOpenAPI(); The following example shows you how to resolve parts of the contract. JsonObject petModel = routerBuilder.getOpenAPI().getCached(JsonPointer.from("/components/schemas/Pet")); 4.15.6. Validating web requests without OpenAPI In the vertx-web-api-contract module, you could validate HTTP requests using HTTPRequestValidationHandler . You did not have to use OpenAPI for validations. In Eclipse Vert.x 4, to validate HTTP requests use the vertx-web-validation module. You can import this module and validate requests without using OpenAPI. Use ValidationHandler to validate requests. 4.15.7. Updates in the Eclipse Vert.x web API service The vertx-web-api-service module has been updated and can be used with the vertx-web-validation module. If you are working with the vertx-web-openapi module, there is no change in the web service functionality. However, if you do not use OpenAPI, then to use the web service module with the vertx-web-validation module you must use the RouteToEBServiceHandler class. 
router.get("/api/transactions") .handler( ValidationHandlerBuilder.create(schemaParser) .queryParameter(optionalParam("from", stringSchema())) .queryParameter(optionalParam("to", stringSchema())) .build() ).handler( RouteToEBServiceHandler.build(eventBus, "transactions.myapplication", "getTransactionsList") ); The vertx-web-api-service module does not support vertx-web-api-contract . So, when you upgrade to Eclipse Vert.x 4, you must migrate your Eclipse Vert.x OpenAPI applications to vertx-web-openapi module.
[ "eventBus.send(\"the-address\", body, ar -> ...);", "eventBus.consumer(\"the-address\", message -> { message.reply(body, ar -> ...); });", "eventBus.request(\"the-address\", body, ar -> ...);", "eventBus.consumer(\"the-address\", message -> { message.replyAndRequest(body, ar -> ...); });", "Future<String> fut = getSomeFuture(); fut.setHandler(ar -> ...);", "Future<String> fut = getSomeFuture(); fut.onComplete(ar -> ...);", "client.request().connectionHandler(conn -> { // Connection related code }).end();", "client.connectionHandler(conn -> { // Connection related code });", "Verticle createVerticle(String verticleName, ClassLoader classLoader) throws Exception;", "void createVerticle(String verticleName, ClassLoader classLoader, Promise<Callable<Verticle>> promise);", "return new MyVerticle();", "promise.complete(() -> new MyVerticle());", "new Thread(() -> { assertSame(vertx.getOrCreateContext(), vertx.getOrCreateContext()); }).start();", "for (int i = 0;i < n;i++) { vertx.executeBlocking(block, handler); }", "for (int i = 0;i < n;i++) { vertx.executeBlocking(block, false, handler); }", "// 3.x server.requestHandler(req -> { WebSocket ws = req.upgrade(); });", "// 4.x server.requestHandler(req -> { Future<WebSocket> fut = req.toWebSocket(); fut.onSuccess(ws -> { }); });", "// 3.x options.setMaxPoolSize(30); // Maximum connection is set to 30 for each endpoint", "// 4.x options.setMaxWebSockets(30); // Maximum connection is set to 30 for each endpoint", "switch (method) { case GET: break; case OTHER: String s = request.getRawMethod(); if (s.equals(\"PROPFIND\") { } else }", "switch (method.name()) { case \"GET\": break; case \"PROPFIND\"; break; }", "HttpMethod PROPFIND = HttpMethod.valueOf(\"PROPFIND\"); if (method == HttpMethod.GET) { } else if (method.equals(PROPFIND)) { } else { }", "client.request(HttpMethod.OTHER, ...).setRawName(\"PROPFIND\");", "client.request(HttpMethod.valueOf(\"PROPFIND\"), ...);", "HttpClientRequest request = client.get(80, \"example.com\", \"/\", response -> { int statusCode = response.statusCode(); response.exceptionHandler(err -> { // Handle connection error, for example, connection closed }); response.bodyHandler(body -> { // Handle body entirely }); }); request.exceptionHandler(err -> { // Handle connection error OR response error }); request.end();", "client.get(80, \"example.com\", \"/some-uri\") .send(ar -> { if (ar.suceeded()) { HttpResponse<Buffer> response = ar.result(); // Handle response } else { // Handle error } });", "HttpClientRequest request = client.get(80, \"example.com\", \"/\", response -> { int statusCode = response.statusCode(); response.exceptionHandler(err -> { // Handle connection error, for example, connection closed }); response.bodyHandler(body -> { // Handle body entirely }); }); request.exceptionHandler(err -> { // Handle connection error OR response error }); request.end();", "client.request(HttpMethod.GET, 80, \"example.com\", \"/\", ar -> { if (ar.succeeded()) { HttpClientRequest = ar.result(); request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse = ar2.result(); int statusCode = response.statusCode(); response.body(ar3 -> { if (ar3.succeeded()) { Buffer body = ar3.result(); // Handle body entirely } else { // Handle server error, for example, connection closed } }); } else { // Handle server error, for example, connection closed } }); } else { // Connection error, for example, invalid server or invalid SSL certificate } });", "Future<Buffer> fut = client.request(HttpMethod.GET, 80, \"example.com\", \"/\") 
.compose(request -> request.send().compose(response -> { int statusCode = response.statusCode(); if (statusCode == 200) { return response.body(); } else { return Future.failedFuture(\"Unexpectd status code\"); } }) }); fut.onComplete(ar -> { if (ar.succeeded()) { Buffer body = ar.result(); // Handle body entirely } else { // Handle error } });", "request.end();", "request.end(Buffer.buffer(\"hello world));", "writeStream.pipeTo(request, ar -> { if (ar.succeeded()) { // Sent the stream } });", "// Send a request and process the response request.onComplete(ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); // Handle the response } }) request.end(); // The new send method combines all the operations request.send(ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); // Handle the response } }));", "response.bodyHandler(body -> { // Process the request body }); response.exceptionHandler(err -> { // Could not get the request body });", "response.body(ar -> { if (ar.succeeded()) { // Process the request body } else { // Could not get the request body } });", "response.endHandler(v -> { // Response ended }); response.exceptionHandler(err -> { // Response failed, something went wrong });", "response.end(ar -> { if (ar.succeeded()) { // Response ended } else { // Response failed, something went wrong } });", "HttpClient client = vertx.createHttpClient(options); client.request(HttpMethod.GET, 8443, \"localhost\", \"/\") .onSuccess(request -> { request.onSuccess(resp -> { //Code to handle HTTP response }); });", "HttpClient client = vertx.createHttpClient(options); client.request(HttpMethod.GET, 8443, \"localhost\", \"/\") .onSuccess(request -> { request.response().onSuccess(resp -> { //Code to handle HTTP response }); });", "Future<HttpClientResponse> f1 = client.get(8080, \"localhost\", \"/uri\", HttpHeaders.set(\"foo\", \"bar\"));", "Future<HttpClientResponse> f2 = client.post(8080, \"localhost\", \"/uri\", HttpHeaders.set(\"foo\", \"bar\"), Buffer.buffer(\"some-data\"));", "Future<HttpClientResponse> f3 = client.post(8080, \"localhost\", \"/uri\", HttpHeaders.set(\"foo\", \"bar\"), asyncFile);", "client.request(HttpMethod.GET, 8080, \"example.com\", \"/resource\", ar -> { if (ar.succeeded()) { HttpClientRequest request = ar.result(); request.putHeader(\"content-type\", \"application/json\") request.send(new JsonObject().put(\"hello\", \"world\")) .onSuccess(response -> { // }).onFailure(err -> { // }); } })", "client.request().connectionHandler(conn -> { // Connection related code }).end();", "client.connectionHandler(conn -> { // Connection related code });", "client.request(HttpMethod.CONNECT, uri, ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); if (response.statusCode() == 200) { NetSocket so = response.netSocket(); } } }).end();", "client.request(HttpMethod.CONNECT, uri, ar -> { }).netSocket(ar -> { if (ar.succeeded()) { // Got a response with a 200 status code NetSocket so = ar.result(); // Go for tunneling } }).end();", "Future<HttpClientResponse> f1 = client.send(HttpMethod.GET, 8080, \"localhost\", \"/uri\", HttpHeaders.set(\"foo\", \"bar\"));", "MultiMap headers = MultiMap.caseInsensitiveMultiMap();", "MultiMap headers = HttpHeaders.headers();", "MultiMap headers = HttpHeaders.set(\"content-type\", \"application.data\");", "CaseInsensitiveHeaders headers = new CaseInsensitiveHeaders();", "MultiMap multiMap = MultiMap#caseInsensitiveMultiMap();", "MultiMap headers = HttpHeaders.headers();", "client.request(HttpMethod.GET, 
8080, \"example.com\", \"/resource\", ar -> { if (ar.succeeded()) { HttpClientRequest request = ar.result(); request.putHeader(\"content-type\", \"application/json\") request.send(new JsonObject().put(\"hello\", \"world\")) .onSuccess(response -> { // }).onFailure(err -> { // }); } })", "switch (options.getClientAuth()) { case REQUIRED: // ... behavior same as in releases prior to {VertX} {v4} break; default: // fallback statement }", "if (options.getClientAuth() == ClientAuth.REQUIRED) { // behavior in releases prior to {VertX} {v4}", "WriteStreamSubscriber<Buffer> subscriber = writeStream.toSubscriber();", "subscriber.onComplete(() -> { // Called after writeStream.end() is invoked, even if operation has not completed });", "subscriber.onWriteStreamEnd(() -> { // Called after writeStream.end() is invoked and completes successfully });", "subscriber.onWriteStreamError(() -> { // Called after writeStream.end() is invoked and fails });", "Future<JsonObject> fut = ConfigRetriever. getConfigAsFuture(retriever);", "fut = retriever.getConfig();", "List<Foo> foo1 = Json.decodeValue(json, new TypeReference<List<Foo>>() {});", "List<Foo> foo2 = io.vertx.core.json.jackson.JacksonCodec.decodeValue(json, new TypeReference<List<Foo>>() {});", "ObjectMapper mapper = Json.mapper;", "mapper = io.vertx.core.json.jackson.DatabindCodec.mapper();", "Json.mapper = someMapper;", "java -Dvertx.json.base64=legacy", "public String toBase64(String base64Url) { return base64Url .replace('+', '-') .replace('/', '_'); } public String toBase64Url(String base64) { return base64 .replace('-', '+') .replace('_', '/'); }", "String base64url = someJsonObject.getString(\"base64encodedElement\") String base64 = toBase64(base64url);", "// simple deserializer from Base64 to byte[] class ByteArrayDeserializer extends JsonDeserializer<byte[]> { ByteArrayDeserializer() { } public byte[] deserialize(JsonParser p, DeserializationContext ctxt) { String text = p.getText(); return Base64.getDecoder() .decode(text); } } // ObjectMapper mapper = new ObjectMapper(); // create a custom module to address the Base64 decoding SimpleModule module = new SimpleModule(); module.addDeserializer(byte[].class, new ByteArrayDeserializer()); mapper.registerModule(module); // JSON to POJO with custom deserializer mapper.readValue(json, MyClass.class);", "FaviconHandler.create(); ErrorHandler.create();", "FaviconHandler.create(vertx); ErrorHandler.create(vertx);", "router // protect everything but \"/excluded/path\" .route().handler(JWTAuthHandler(jwtAuth, \"/excluded/path\")", "router .route(\"/excluded/path\").handler(/* public access to \"/excluded/path\" */) // protect everything .route().handler(JWTAuthHandler(jwtAuth)", "Router router = Router.router(vertx); SockJSHandlerOptions options = new SockJSHandlerOptions() .setRegisterWriteHandler(true); // enable the event bus consumer registration SockJSHandler sockJSHandler = SockJSHandler.create(vertx, options); router.mountSubRouter(\"/myapp\", sockJSHandler.socketHandler(sockJSSocket -> { // Retrieve the writeHandlerID and store it (For example, in a local map) String writeHandlerID = sockJSSocket.writeHandlerID(); }));", "SockJSHandlerOptions options = new SockJSHandlerOptions() .setRegisterWriteHandler(true) // enable the event bus consumer registration .setLocalWriteHandler(false) // register a clustered event bus consumer", "RouterBuilder.create(vertx, \"petstore.yaml\").onComplete(ar -> { if (ar.succeeded()) { // Spec loaded with success RouterBuilder routerBuilder = ar.result(); } else { // 
Something went wrong during router builder initialization Throwable exception = ar.cause(); } });", "RouterBuilder.create(vertx, \"petstore.yaml\") .onSuccess(routerBuilder -> { // Spec loaded with success }) .onFailure(exception -> { // Something went wrong during router builder initialization });", "RequestParameters parameters = routingContext.get(io.vertx.ext.web.validation.ValidationHandler.REQUEST_CONTEXT_KEY); int aParam = parameters.queryParameter(\"aParam\").getInteger();", "JsonObject model = routerBuilder.operation(\"getPets\").getOperationModel();", "JsonObject contract = routerBuilder.getOpenAPI().getOpenAPI();", "JsonObject petModel = routerBuilder.getOpenAPI().getCached(JsonPointer.from(\"/components/schemas/Pet\"));", "router.get(\"/api/transactions\") .handler( ValidationHandlerBuilder.create(schemaParser) .queryParameter(optionalParam(\"from\", stringSchema())) .queryParameter(optionalParam(\"to\", stringSchema())) .build() ).handler( RouteToEBServiceHandler.build(eventBus, \"transactions.myapplication\", \"getTransactionsList\") );" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_4.3_migration_guide/changes-in-common-components_vertx
Chapter 9. Configuring email notifications
Chapter 9. Configuring email notifications Email notifications are created by Satellite Server periodically or after completion of certain events. The periodic notifications can be sent daily, weekly or monthly. For an overview of available notification types, see Section 9.1, "Email notification types" . Users do not receive any email notifications by default. An administrator can configure users to receive notifications based on criteria such as the type of notification, and frequency. Important Satellite Server does not enable outgoing emails by default, therefore you must review your email configuration. For more information, see Configuring Satellite Server for Outgoing Emails in Installing Satellite Server in a connected network environment . 9.1. Email notification types Satellite can create the following email notifications: Audit summary A summary of all activity audited by Satellite Server. Capsule sync failure A notification sent after Capsule synchronization fails. Compliance policy summary A summary of OpenSCAP policy reports and their results. Content view promote failure A notification sent after content view promotion fails. Content view publish failure A notification sent after content view publication fails. Host built A notification sent after a host is built. Host errata advisory A summary of applicable and installable errata for hosts managed by the user. Promote errata A notification sent only after a content view promotion. It contains a summary of errata applicable and installable to hosts registered to the promoted content view. This allows a user to monitor what updates have been applied to which hosts. Repository sync failure A notification sent after repository synchronization fails. Sync errata A notification sent only after synchronizing a repository. It contains a summary of new errata introduced by the synchronization. For a complete list of email notification types, navigate to Administer > Users in the Satellite web UI, click the Username of the required user, and select the Email Preferences tab. 9.2. Configuring email notification preferences You can configure Satellite to send email messages to individual users registered to Satellite. Satellite sends the email to the email address that has been added to the account, if present. Users can edit the email address by clicking on their name in the top-right of the Satellite web UI and selecting My account . Configure email notifications for a user from the Satellite web UI. Note If you want to send email notifications to a group email address instead of an individual email address, create a user account with the group email address and minimal Satellite permissions, then subscribe the user account to the desired notification types. Prerequisites The user you are configuring to receive email notifications has a role with this permission: view_mail_notifications . Procedure In the Satellite web UI, navigate to Administer > Users . Click the Username of the user you want to edit. On the User tab, verify the value of the Mail field. Email notifications will be sent to the address in this field. On the Email Preferences tab, select Mail Enabled . Select the notifications you want the user to receive using the drop-down menus to the notification types. Note The Audit Summary notification can be filtered by entering the required query in the Mail Query text box. Click Submit . The user will start receiving the notification emails. 9.3. 
Testing email delivery To verify the delivery of emails, send a test email to a user. If the email gets delivered, the settings are correct. Procedure In the Satellite web UI, navigate to Administer > Users . Click on the username. On the Email Preferences tab, click Test email . A test email message is sent immediately to the user's email address. If the email is delivered, the verification is complete. Otherwise, you must perform the following diagnostic steps: Verify the user's email address. Verify Satellite Server's email configuration. Examine firewall and mail server logs. If your Satellite Server uses the Postfix service for email delivery, the test email might be held in the queue. To verify, enter the mailq command to list the current mail queue. If the test email is held in the queue, mailq displays the following message: To fix the problem, start the Postfix service on your Satellite Server: 9.4. Testing email notifications To verify that users are correctly subscribed to notifications, trigger the notifications manually. Procedure To trigger the notifications, execute the following command: Replace My_Frequency with one of the following: daily weekly monthly This triggers all notifications scheduled for the specified frequency for all the subscribed users. If every subscribed user receives the notifications, the verification succeeds. Note Sending manually triggered notifications to individual users is currently not supported. 9.5. Changing email notification settings for a host Satellite can send event notifications for a host to the host's registered owner. You can configure Satellite to send email notifications either to an individual user or a user group. When set to a user group, all group members who are subscribed to the email type receive a message. Receiving email notifications for a host can be useful, but also overwhelming if you are expecting to receive frequent errors, for example, because of a known issue or error you are working around. Procedure In the Satellite web UI, navigate to Hosts > All Hosts , locate the host that you want to view, and click Edit in the Actions column. Go to the Additional Information tab. If the checkbox Include this host within Satellite reporting is checked, then the email notifications are enabled on that host. Optional: Toggle the checkbox to enable or disable the email notifications. Note If you want to receive email notifications, ensure that you have an email address set in your user settings.
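For example, to run the checks described above from a shell on Satellite Server, you can trigger the daily notification batch and inspect the Postfix queue as follows. This is only a sketch that assumes Postfix is the delivery service, as described in the testing procedure.
# Trigger all daily notifications for subscribed users
foreman-rake reports:daily
# List messages currently held in the Postfix mail queue
mailq
# Start Postfix if the queue output reports that the mail system is down
systemctl start postfix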
[ "postqueue: warning: Mail system is down -- accessing queue directly -Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient------- BE68482A783 1922 Thu Oct 3 05:13:36 [email protected]", "systemctl start postfix", "foreman-rake reports:_My_Frequency_" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/Configuring_Email_Notifications_admin
3.3. Setting Preferences
3.3. Setting Preferences 3.3.1. Setting Preferences The ModeShape Preferences dialog allows for resource versioning to be set along with specific file types and folders that you never wish to have published to or unpublished from a ModeShape repository. Procedure 3.3. Setting preferences Open ModeShape Preferences dialog. The ModeShape Preferences dialog is accessed by navigating to Window -> Preferences -> Content Repository -> Publishing . This dialog allows you to set whether resource versioning will be used for your ModeShape repository or not. Figure 3.7. ModeShape Preferences dialog Activate resource versioning. Click the Enable resource versioning checkbox to activate resource versioning. 3.3.2. Ignored Resources You can manage the resources to be published using the Ignored Resources menu. Procedure 3.4. Manage Ignored Resources Open Ignored Resources Preferences dialog. To open the Ignored Resources dialog, click Window -> Preferences -> Content Repository -> Publishing -> Ignored Resources . On this screen you can manage the resources that will not be published to your ModeShape repository. The current excluded file types and folders are presented as the list of checkbox items that appear under the Ignored Resources heading. Figure 3.8. ModeShape: Ignored Resources Preferences dialog Add a new resource to the Ignored Resources list. To add a new file extension type or folder name to be filtered from publishing, click the New button, enter the details and click the Apply button so that your preference changes are saved. Remove a resource from the Ignored Resources list. To remove an entry, select an entry from the list in the Preferences dialog and click the Remove button. Ensure that you click the Apply button so that your preference changes are saved. Note If you decide to ignore a resource that has been published in the past, ensure that all instances are unpublished before ignoring it, otherwise you will not be able to unpublish the resource either. That said, this can be an effective way of ensuring a resource cannot be unpublished by accident. Remember though that, since preference settings are only local to your working environment, in a multi-user ModeShape repository someone else could unpublish it.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/sect-Setting_Preferences
Chapter 11. Using service accounts in applications
Chapter 11. Using service accounts in applications 11.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 11.2. Default service accounts Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project. 11.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service Account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 11.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service Account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any imagestream in the project using the internal container image registry. 11.2.3. About automatically generated service account token secrets When a service account is created, a service account token secret is automatically generated for it. This service account token secret, along with an automatically generated docker configuration secret, is used to authenticate to the internal OpenShift Container Platform registry. 
Do not rely on these automatically generated secrets for your own use; they might be removed in a future OpenShift Container Platform release. Note Prior to OpenShift Container Platform 4.11, a second service account token secret was generated when a service account was created. This service account token secret was used to access the Kubernetes API. Starting with OpenShift Container Platform 4.11, this second service account token secret is no longer created. This is because the LegacyServiceAccountTokenNoAutoGeneration upstream Kubernetes feature gate was enabled, which stops the automatic generation of secret-based service account tokens to access the Kubernetes API. After upgrading to 4.12, any existing service account token secrets are not deleted and continue to function. Workloads are automatically injected with a projected volume to obtain a bound service account token. If your workload needs an additional service account token, add an additional projected volume in your workload manifest. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. For more information, see Configuring bound service account tokens using volume projection . You can also manually create a service account token secret to obtain a token, if the security exposure of a non-expiring token in a readable API object is acceptable to you. For more information, see Creating a service account token secret . Additional resources For information about requesting bound service account tokens, see Configuring bound service account tokens using volume projection . For information about creating a service account token secret, see Creating a service account token secret . 11.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>
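If a workload needs an additional bound service account token as described in the note above, it can declare a projected volume in its pod manifest. The following YAML is a minimal sketch; the volume name, mount path, audience, expiration, and image values are illustrative assumptions rather than values required by OpenShift Container Platform.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: robot
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    volumeMounts:
    - name: extra-token
      mountPath: /var/run/secrets/tokens     # the token file is mounted here
      readOnly: true
  volumes:
  - name: extra-token
    projected:
      sources:
      - serviceAccountToken:
          path: extra-token                  # file name under the mount path
          audience: https://example.com      # intended audience of the bound token
          expirationSeconds: 3600            # the kubelet refreshes the token before it expires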
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authentication_and_authorization/using-service-accounts
1.8.4.2. Firewall Marks
1.8.4.2. Firewall Marks Firewall marks are an easy and efficient way to group ports used for a protocol or group of related protocols. For example, if LVS is deployed to run an e-commerce site, firewall marks can be used to bundle HTTP connections on port 80 and secure HTTPS connections on port 443. By assigning the same firewall mark to the virtual server for each protocol, state information for the transaction can be preserved because the LVS router forwards all requests to the same real server after a connection is opened. Because of its efficiency and ease-of-use, administrators of LVS should use firewall marks instead of persistence whenever possible for grouping connections. However, you should still add persistence to the virtual servers in conjunction with firewall marks to ensure the clients are reconnected to the same server for an adequate period of time.
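As a sketch of how this grouping is typically implemented on the LVS router, the following iptables rules assign the same firewall mark (80) to HTTP and HTTPS traffic destined for a virtual IP address; the address 192.168.0.100 and the mark value are illustrative assumptions, not values taken from this overview. The virtual server is then configured to match firewall mark 80 instead of a single port.
# Mark HTTP and HTTPS traffic for the virtual IP with the same firewall mark
/sbin/iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.100/32 --dport 80 -j MARK --set-mark 80
/sbin/iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.100/32 --dport 443 -j MARK --set-mark 80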
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s3-lve-fwmarks-cso
4.3. Other Restrictions
4.3. Other Restrictions For the list of all other restrictions and issues affecting virtualization, read the Red Hat Enterprise Linux 6 Release Notes . The Red Hat Enterprise Linux 6 Release Notes cover the new features, known issues, and restrictions, and are updated as issues are discovered.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-virtualization_restrictions-other_restrictions
Chapter 1. Preparing to install on IBM Power Virtual Server
Chapter 1. Preparing to install on IBM Power Virtual Server The installation workflows documented in this section are for IBM Power(R) Virtual Server infrastructure environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on IBM Power Virtual Server Before installing OpenShift Container Platform on IBM Power(R) Virtual Server you must create a service account and configure an IBM Cloud(R) account. See Configuring an IBM Cloud(R) account for details about creating an account, configuring DNS and supported IBM Power(R) Virtual Server regions. You must manually manage your cloud credentials when installing a cluster to IBM Power(R) Virtual Server. Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. 1.3. Choosing a method to install OpenShift Container Platform on IBM Power Virtual Server You can install OpenShift Container Platform on IBM Power(R) Virtual Server using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Power(R) Virtual Server using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Power(R) Virtual Server infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Power(R) Virtual Server : You can install a customized cluster on IBM Power(R) Virtual Server infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Power(R) Virtual Server into an existing VPC : You can install OpenShift Container Platform on IBM Power(R) Virtual Server into an existing Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on IBM Power(R) Virtual Server : You can install a private cluster on IBM Power(R) Virtual Server. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. Installing a cluster on IBM Power(R) Virtual Server in a restricted network : You can install OpenShift Container Platform on IBM Power(R) Virtual Server on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. 1.4. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on IBM Power(R) Virtual Server, you must set the CCO to manual mode as part of the installation process. 
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys 1.5. Next steps Configuring an IBM Cloud(R) account
[ "RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')", "CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)", "oc image extract $CCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
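A minimal sketch, not part of the original procedure, for confirming that the extracted binary exposes the IBM Cloud subcommands before you continue; the ccoctl.rhel9 file name is assumed from the extraction step above and should be adjusted to the RHEL version you chose.
# Assumes ccoctl was extracted into the current directory as ccoctl.rhel9 (see the steps above).
./ccoctl.rhel9 ibmcloud --help
# The command should print the IBM Cloud credential subcommands; if it does not run,
# re-check that the binary architecture matches this host.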
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_power_virtual_server/preparing-to-install-on-ibm-power-vs
4.8. The file_t and default_t Types
4.8. The file_t and default_t Types When using a file system that supports extended attributes (EA), the file_t type is the default type of a file that has not yet been assigned an EA value. This type is only used for this purpose and does not exist on correctly-labeled file systems, because all files on a system running SELinux should have a proper SELinux context, and the file_t type is never used in file-context configuration [4] . The default_t type is used on files that do not match any pattern in file-context configuration, so that such files can be distinguished from files that do not have a context on disk, and generally are kept inaccessible to confined domains. For example, if you create a new top-level directory, such as mydirectory/ , this directory may be labeled with the default_t type. If services need access to this directory, you need to update the file-context configuration for this location. See Section 4.7.2, "Persistent Changes: semanage fcontext" for details on adding a context to the file-context configuration. [4] Files in the /etc/selinux/targeted/contexts/files/ directory define contexts for files and directories. Files in this directory are read by the restorecon and setfiles utilities to restore files and directories to their default contexts.
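As an illustration only, a persistent labeling change for such a directory might look like the following sketch. The /mydirectory path and the httpd_sys_content_t target type are hypothetical choices, not taken from this guide; pick the type that the service accessing the directory actually requires.
# Add a persistent file-context rule for the new directory and everything under it (hypothetical type).
semanage fcontext -a -t httpd_sys_content_t "/mydirectory(/.*)?"
# Apply the new rule to the files that already exist.
restorecon -R -v /mydirectory
# Verify the resulting SELinux context.
ls -dZ /mydirectory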
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-The_file_t_and_default_t_Types
Chapter 16. MultiNetworkPolicy [k8s.cni.cncf.io/v1beta1]
Chapter 16. MultiNetworkPolicy [k8s.cni.cncf.io/v1beta1] Description MultiNetworkPolicy is a CRD schema to provide NetworkPolicy mechanism for net-attach-def which is specified by the Network Plumbing Working Group. MultiNetworkPolicy is identical to Kubernetes NetworkPolicy, See: https://kubernetes.io/docs/concepts/services-networking/network-policies/ . Type object 16.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior for this MultiNetworkPolicy. 16.1.1. .spec Description Specification of the desired behavior for this MultiNetworkPolicy. Type object Required podSelector Property Type Description egress array List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). This field is beta-level in 1.8 egress[] object NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to. This type is beta-level in 1.8 ingress array List of ingress rules to be applied to the selected pods. Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default) ingress[] object NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from. podSelector object This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. policyTypes array (string) List of rule types that the NetworkPolicy relates to. Valid options are 'Ingress', 'Egress', or 'Ingress,Egress'. 
If this field is not specified, it will default based on the existence of Ingress or Egress rules; policies that contain an Egress section are assumed to affect Egress, and all policies (whether or not they contain an Ingress section) are assumed to affect Ingress. If you want to write an egress-only policy, you must explicitly specify policyTypes [ 'Egress' ]. Likewise, if you want to write a policy that specifies that no egress is allowed, you must specify a policyTypes value that include 'Egress' (since such a policy would not include an Egress section and would otherwise default to just [ 'Ingress' ]). This field is beta-level in 1.8 16.1.2. .spec.egress Description List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). This field is beta-level in 1.8 Type array 16.1.3. .spec.egress[] Description NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to. This type is beta-level in 1.8 Type object Property Type Description ports array List of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. ports[] object NetworkPolicyPort describes a port to allow traffic on to array List of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list. to[] object NetworkPolicyPeer describes a peer to allow traffic from. Only certain combinations of fields are allowed 16.1.4. .spec.egress[].ports Description List of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. Type array 16.1.5. .spec.egress[].ports[] Description NetworkPolicyPort describes a port to allow traffic on Type object Property Type Description port integer-or-string The port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. protocol string The protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. 16.1.6. .spec.egress[].to Description List of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). 
If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list. Type array 16.1.7. .spec.egress[].to[] Description NetworkPolicyPeer describes a peer to allow traffic from. Only certain combinations of fields are allowed Type object Property Type Description ipBlock object IPBlock defines policy on a particular IPBlock. If this field is set then neither of the other fields can be. namespaceSelector object Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector. podSelector object This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. 16.1.8. .spec.egress[].to[].ipBlock Description IPBlock defines policy on a particular IPBlock. If this field is set then neither of the other fields can be. Type object Required cidr Property Type Description cidr string CIDR is a string representing the IP Block Valid examples are '192.168.1.1/24' except array (string) Except is a slice of CIDRs that should not be included within an IP Block Valid examples are '192.168.1.1/24' Except values will be rejected if they are outside the CIDR range 16.1.9. .spec.egress[].to[].namespaceSelector Description Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is 'key', the operator is 'In', and the values array contains only 'value'. The requirements are ANDed. 16.1.10. .spec.egress[].to[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 16.1.11. .spec.egress[].to[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 16.1.12. .spec.egress[].to[].podSelector Description This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is 'key', the operator is 'In', and the values array contains only 'value'. The requirements are ANDed. 16.1.13. .spec.egress[].to[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 16.1.14. .spec.egress[].to[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 16.1.15. .spec.ingress Description List of ingress rules to be applied to the selected pods. Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default) Type array 16.1.16. .spec.ingress[] Description NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from. Type object Property Type Description from array List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list. from[] object NetworkPolicyPeer describes a peer to allow traffic from. Only certain combinations of fields are allowed ports array List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. 
If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. ports[] object NetworkPolicyPort describes a port to allow traffic on 16.1.17. .spec.ingress[].from Description List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list. Type array 16.1.18. .spec.ingress[].from[] Description NetworkPolicyPeer describes a peer to allow traffic from. Only certain combinations of fields are allowed Type object Property Type Description ipBlock object IPBlock defines policy on a particular IPBlock. If this field is set then neither of the other fields can be. namespaceSelector object Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector. podSelector object This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. 16.1.19. .spec.ingress[].from[].ipBlock Description IPBlock defines policy on a particular IPBlock. If this field is set then neither of the other fields can be. Type object Required cidr Property Type Description cidr string CIDR is a string representing the IP Block Valid examples are '192.168.1.1/24' except array (string) Except is a slice of CIDRs that should not be included within an IP Block Valid examples are '192.168.1.1/24' Except values will be rejected if they are outside the CIDR range 16.1.20. .spec.ingress[].from[].namespaceSelector Description Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is 'key', the operator is 'In', and the values array contains only 'value'. The requirements are ANDed. 16.1.21. .spec.ingress[].from[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. 
The requirements are ANDed. Type array 16.1.22. .spec.ingress[].from[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 16.1.23. .spec.ingress[].from[].podSelector Description This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is 'key', the operator is 'In', and the values array contains only 'value'. The requirements are ANDed. 16.1.24. .spec.ingress[].from[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 16.1.25. .spec.ingress[].from[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 16.1.26. .spec.ingress[].ports Description List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. Type array 16.1.27. .spec.ingress[].ports[] Description NetworkPolicyPort describes a port to allow traffic on Type object Property Type Description port integer-or-string The port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. protocol string The protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. 16.1.28. 
.spec.podSelector Description This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) 16.1.29. .spec.podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 16.1.30. .spec.podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 16.2. API endpoints The following API endpoints are available: /apis/k8s.cni.cncf.io/v1beta1/multi-networkpolicies GET : list objects of kind MultiNetworkPolicy /apis/k8s.cni.cncf.io/v1beta1/namespaces/{namespace}/multi-networkpolicies DELETE : delete collection of MultiNetworkPolicy GET : list objects of kind MultiNetworkPolicy POST : create a MultiNetworkPolicy /apis/k8s.cni.cncf.io/v1beta1/namespaces/{namespace}/multi-networkpolicies/{name} DELETE : delete a MultiNetworkPolicy GET : read the specified MultiNetworkPolicy PATCH : partially update the specified MultiNetworkPolicy PUT : replace the specified MultiNetworkPolicy 16.2.1. /apis/k8s.cni.cncf.io/v1beta1/multi-networkpolicies HTTP method GET Description list objects of kind MultiNetworkPolicy Table 16.1. HTTP responses HTTP code Reponse body 200 - OK MultiNetworkPolicyList schema 401 - Unauthorized Empty 16.2.2. /apis/k8s.cni.cncf.io/v1beta1/namespaces/{namespace}/multi-networkpolicies HTTP method DELETE Description delete collection of MultiNetworkPolicy Table 16.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MultiNetworkPolicy Table 16.3. HTTP responses HTTP code Reponse body 200 - OK MultiNetworkPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create a MultiNetworkPolicy Table 16.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.5. Body parameters Parameter Type Description body MultiNetworkPolicy schema Table 16.6. HTTP responses HTTP code Reponse body 200 - OK MultiNetworkPolicy schema 201 - Created MultiNetworkPolicy schema 202 - Accepted MultiNetworkPolicy schema 401 - Unauthorized Empty 16.2.3. /apis/k8s.cni.cncf.io/v1beta1/namespaces/{namespace}/multi-networkpolicies/{name} Table 16.7. Global path parameters Parameter Type Description name string name of the MultiNetworkPolicy HTTP method DELETE Description delete a MultiNetworkPolicy Table 16.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 16.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MultiNetworkPolicy Table 16.10. HTTP responses HTTP code Reponse body 200 - OK MultiNetworkPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MultiNetworkPolicy Table 16.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.12. HTTP responses HTTP code Reponse body 200 - OK MultiNetworkPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MultiNetworkPolicy Table 16.13. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.14. Body parameters Parameter Type Description body MultiNetworkPolicy schema Table 16.15. HTTP responses HTTP code Reponse body 200 - OK MultiNetworkPolicy schema 201 - Created MultiNetworkPolicy schema 401 - Unauthorized Empty
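The following is a hedged sketch, not taken from the schema reference above, of how a minimal MultiNetworkPolicy object might be created from the command line. The example-ns namespace, the macvlan-net network attachment referenced by the policy-for annotation, and the pod labels are all hypothetical; verify the annotation key against your cluster's Multus and MultiNetworkPolicy documentation before relying on it.
# Hypothetical example: on the secondary network defined by the net-attach-def "macvlan-net",
# allow ingress to pods labeled app=db only from pods labeled app=web.
cat <<EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-web-to-db
  namespace: example-ns
  annotations:
    k8s.v1.cni.cncf.io/policy-for: example-ns/macvlan-net
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
EOF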
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/multinetworkpolicy-k8s-cni-cncf-io-v1beta1
21.2. Displaying Log Files
21.2. Displaying Log Files You can display the Directory Server log files using the command line and web console: 21.2.1. Displaying Log Files Using the Command Line To display the log files using the command line, use the utilities included in Red Hat Enterprise Linux, such as less , more , and cat . For example: To display the locations of log files: Note If logging for a log type is not enabled, the corresponding log file does not exist. 21.2.2. Displaying Log Files Using the Web Console To display the Directory Server log files: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Monitoring menu. Open the Logging menu, and select the log file you want to display. Optionally, you can apply the following settings to the log file viewer: Set the number of lines to display in the Log Lines To Show field. Enable automatically displaying new log entries by selecting Continuously Refresh . Click the Refresh button to apply the changes.
[ "less /var/log/dirsrv/slapd- instance_name /access", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-accesslog nsslapd-errorlog nsslapd-auditlog nsslapd-auditfaillog nsslapd-accesslog: /var/log/dirsrv/slapd- instance_name /access nsslapd-errorlog: /var/log/dirsrv/slapd- instance_name /errors nsslapd-auditlog: /var/log/dirsrv/slapd- instance_name /audit nsslapd-auditfaillog: /var/log/dirsrv/slapd- instance_name /audit-failure" ]
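To follow new log entries as they are written, a simple sketch from the command line ( instance_name is a placeholder for your actual instance name):
# Continuously display new entries in the error log; press Ctrl+C to stop.
tail -f /var/log/dirsrv/slapd-instance_name/errors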
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/displaying_log_files
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_the_data_grid_command_line_interface/red-hat-data-grid
Chapter 4. Configuring a cluster-wide proxy
Chapter 4. Configuring a cluster-wide proxy If you are using an existing Virtual Private Cloud (VPC), you can configure a cluster-wide proxy during an OpenShift Dedicated cluster installation or after the cluster is installed. When you enable a proxy, the core cluster components are denied direct access to the internet, but the proxy does not affect user workloads. Note Only cluster system egress traffic is proxied, including calls to the cloud provider API. You can enable a proxy only for OpenShift Dedicated clusters that use the Customer Cloud Subscription (CCS) model. If you use a cluster-wide proxy, you are responsible for maintaining the availability of the proxy to the cluster. If the proxy becomes unavailable, then it might impact the health and supportability of the cluster. 4.1. Prerequisites for configuring a cluster-wide proxy To configure a cluster-wide proxy, you must meet the following requirements. These requirements are valid when you configure a proxy during installation or postinstallation. General requirements You are the cluster owner. Your account has sufficient privileges. You have an existing Virtual Private Cloud (VPC) for your cluster. You are using the Customer Cloud Subscription (CCS) model for your cluster. The proxy can access the VPC for the cluster and the private subnets of the VPC. The proxy is also accessible from the VPC for the cluster and from the private subnets of the VPC. You have added the following endpoints to your VPC endpoint: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works at the container level and not at the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not enough. Important When using a cluster-wide proxy, you must configure the s3.<aws_region>.amazonaws.com endpoint as type Gateway . Network requirements If your proxy re-encrypts egress traffic, you must create exclusions to the domain and port combinations. The following table offers guidance into these exceptions. Your proxy must exclude re-encrypting the following OpenShift URLs: Address Protocol/Port Function observatorium-mst.api.openshift.com https/443 Required. Used for Managed OpenShift-specific telemetry. sso.redhat.com https/443 The https://cloud.redhat.com/openshift site uses authentication from sso.redhat.com to download the cluster pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, and chargeback reporting. Additional resources For the installation prerequisites for OpenShift Dedicated clusters that use the Customer Cloud Subscription (CCS) model, see Customer Cloud Subscriptions on AWS or Customer Cloud Subscriptions on GCP . 4.2. Responsibilities for additional trust bundles If you supply an additional trust bundle, you are responsible for the following requirements: Ensuring that the contents of the additional trust bundle are valid Ensuring that the certificates, including intermediary certificates, contained in the additional trust bundle have not expired Tracking the expiry and performing any necessary renewals for certificates contained in the additional trust bundle Updating the cluster configuration with the updated additional trust bundle 4.3. 
Configuring a proxy during installation You can configure an HTTP or HTTPS proxy when you install an OpenShift Dedicated with Customer Cloud Subscription (CCS) cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy during installation by using Red Hat OpenShift Cluster Manager. 4.4. Configuring a proxy during installation using OpenShift Cluster Manager If you are installing an OpenShift Dedicated cluster into an existing Virtual Private Cloud (VPC), you can use Red Hat OpenShift Cluster Manager to enable a cluster-wide HTTP or HTTPS proxy during installation. You can enable a proxy only for clusters that use the Customer Cloud Subscription (CCS) model. Prior to the installation, you must verify that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC. For detailed steps to configure a cluster-wide proxy during installation by using OpenShift Cluster Manager, see Creating a cluster on AWS or Creating a cluster on GCP . Additional resources Creating a cluster on AWS Creating a cluster on GCP with Workload Identity Federation authentication 4.5. Configuring a proxy after installation You can configure an HTTP or HTTPS proxy after you install an OpenShift Dedicated with Customer Cloud Subscription (CCS) cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy after installation by using Red Hat OpenShift Cluster Manager. 4.6. Configuring a proxy after installation using OpenShift Cluster Manager You can use Red Hat OpenShift Cluster Manager to add a cluster-wide proxy configuration to an existing OpenShift Dedicated cluster in a Virtual Private Cloud (VPC). You can enable a proxy only for clusters that use the Customer Cloud Subscription (CCS) model. You can also use OpenShift Cluster Manager to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire. Important The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process. Prerequisites You have an OpenShift Dedicated cluster that uses the Customer Cloud Subscription (CCS) model . Your cluster is deployed in a VPC. Procedure Navigate to OpenShift Cluster Manager and select your cluster. Under the Virtual Private Cloud (VPC) section on the Networking page, click Edit cluster-wide proxy . On the Edit cluster-wide proxy page, provide your proxy configuration details: Enter a value in at least one of the following fields: Specify a valid HTTP proxy URL . Specify a valid HTTPS proxy URL . In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. If you are replacing an existing trust bundle file, select Replace file to view the field. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments. Click Confirm . 
Verification Under the Virtual Private Cloud (VPC) section on the Networking page, verify that the proxy configuration for your cluster is as expected.
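As a sketch for confirming the applied configuration from the command line, assuming you have oc access to the cluster (this step is not part of the OpenShift Cluster Manager procedure above):
# Inspect the cluster-wide proxy object that the configuration is written to.
oc get proxy/cluster -o yaml
# The spec should show the httpProxy, httpsProxy, and trustedCA values you configured.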
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/networking/configuring-a-cluster-wide-proxy
Chapter 4. Network Observability Operator in OpenShift Container Platform
Chapter 4. Network Observability Operator in OpenShift Container Platform Network Observability is an OpenShift operator that deploys a monitoring pipeline to collect and enrich network traffic flows that are produced by the Network Observability eBPF agent. 4.1. Viewing statuses The Network Observability Operator provides the Flow Collector API. When a Flow Collector resource is created, it deploys pods and services to create and store network flows in the Loki log store, as well as to display dashboards, metrics, and flows in the OpenShift Container Platform web console. Procedure Run the following command to view the state of FlowCollector : $ oc get flowcollector/cluster Example output Check the status of pods running in the netobserv namespace by entering the following command: $ oc get pods -n netobserv Example output flowlogs-pipeline pods collect flows, enrich the collected flows, and then send the flows to the Loki storage. netobserv-plugin pods create a visualization plugin for the OpenShift Container Platform Console. Check the status of pods running in the namespace netobserv-privileged by entering the following command: $ oc get pods -n netobserv-privileged Example output netobserv-ebpf-agent pods monitor network interfaces of the nodes to get flows and send them to flowlogs-pipeline pods. If you are using the Loki Operator, check the status of pods running in the openshift-operators-redhat namespace by entering the following command: $ oc get pods -n openshift-operators-redhat Example output 4.2. Network Observability Operator architecture The Network Observability Operator provides the FlowCollector API, which is instantiated at installation and configured to reconcile the eBPF agent , the flowlogs-pipeline , and the netobserv-plugin components. Only a single FlowCollector per cluster is supported. The eBPF agent runs on each cluster node with some privileges to collect network flows. The flowlogs-pipeline receives the network flows data and enriches the data with Kubernetes identifiers. If you choose to use Loki, the flowlogs-pipeline sends flow logs data to Loki for storing and indexing. The netobserv-plugin , which is a dynamic OpenShift Container Platform web console plugin, queries Loki to fetch network flows data. Cluster-admins can view the data in the web console. If you do not use Loki, you can generate metrics with Prometheus. Those metrics and their related dashboards are accessible in the web console. For more information, see "Network Observability without Loki". If you are using the Kafka option, the eBPF agent sends the network flow data to Kafka, and the flowlogs-pipeline reads from the Kafka topic before sending to Loki, as shown in the following diagram. Additional resources Network Observability without Loki 4.3. Viewing Network Observability Operator status and configuration You can inspect the status and view the details of the FlowCollector using the oc describe command. Procedure Run the following command to view the status and configuration of the Network Observability Operator: $ oc describe flowcollector/cluster
[ "oc get flowcollector/cluster", "NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready", "oc get pods -n netobserv", "NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m", "oc get pods -n netobserv-privileged", "NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h", "oc describe flowcollector/cluster" ]
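A small sketch for checking readiness from the command line; the jsonpath expression assumes the FlowCollector resource reports standard status conditions and is shown only as an illustration:
# Watch the Network Observability pods until they are all Running.
oc get pods -n netobserv -w
# Print the condition types reported in the FlowCollector status.
oc get flowcollector cluster -o jsonpath='{.status.conditions[*].type}{"\n"}'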
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_observability/nw-network-observability-operator
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 0.2-8 Thu Apr 27 2017 Lenka Spackova Red Hat Access Labs renamed to Red Hat Customer Portal Labs. Revision 0.2-7 Tue Mar 21 2017 Jiri Herrmann Updated a Virtualization Known Issue. Revision 0.2-6 Mon Mar 13 2017 Lenka Spackova Added a known issue to Authentication and Interoperability. Revision 0.2-5 Fri Dec 16 2016 Lenka Spackova Updated the Red Hat Software Collections chapter. Revision 0.2-4 Thu Oct 27 2016 Lenka Spackova Added two known issues to General Updates. Revision 0.2-3 Wed Oct 25 2016 Jiri Herrmann Added a virtualization known issue ( fsgsbase and smep ). Revision 0.2-1 Wed Sep 07 2016 Lenka Spackova Added an SSSD known issue (Authentication and Interoperability). Revision 0.2-0 Mon Aug 29 2016 Lenka Spackova Added two known issues (Installation and Booting). Revision 0.1-9 Mon Aug 01 2016 Lenka Spackova Updated a known issue regarding limited CPU support for Windows 10 guests (Virtualization). Revision 0.1-8 Fri Jul 01 2016 Lenka Spackova Fixed commands in an SSSD feature. Revision 0.1-6 Wed Jun 08 2016 Lenka Spackova Added an SSSD feature (new default values for group names). Revision 0.1-4 Fri Jun 03 2016 Lenka Spackova Added Bugzilla numbers to individual descriptions. Revision 0.1-3 Fri May 27 2016 Lenka Spackova Added new known issues (SSSD, ReaR). Revision 0.1-2 Mon May 16 2016 Lenka Spackova Added a new feature to Clustering (fence agent). Revision 0.1-1 Thu May 12 2016 Lenka Spackova Added known issues related to ReaR . Revision 0.1-0 Tue May 10 2016 Lenka Spackova Release of the Red Hat Enterprise Linux 6.8 Release Notes. Revision 0.0-5 Tue Mar 15 2016 Lenka Spackova Release of the Red Hat Enterprise Linux 6.8 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/appe-6.8_release_notes-revision_history
7.185. rpm
7.185. rpm 7.185.1. RHBA-2015:1452 - rpm bug fix and enhancement update Updated rpm packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The RPM Package Manager (RPM) is a command-line driven package management system capable of installing, uninstalling, verifying, querying, and updating software packages. Bug Fixes BZ# 606239 The output of the %posttrans scriptlet was not correctly displayed to the user, which could lead to important errors being ignored. This update introduces a new API that collects the output from the %posttrans scriptlet. As a result, the yum utility can now access the %posttrans output, and displays it to the user. BZ# 833427 Although the RPM Package Manager does not support packages with files larger than 4 GB, the rpm utility allowed creating source packages where individual files exceeded 4 GB. The installation of such packages then failed with a "Digest mismatch" error. Now, rpm no longer allows the creation of such packages, which in turn prevents the described installation failure. BZ# 1040318 On certain architectures, the value of the "LONGSIZE" tag was displayed incorrectly. This update ensures that on these architectures, the value of "LONGSIZE" is converted to the native byte order correctly, and that it is therefore displayed correctly. BZ# 997774 The behavior of the file mode and directory mode parameters for the %defattr directive was changed in a prior update, which caused building packages that still expected the behavior to fail or to experience problems. The directive has been reverted to the behavior, and a warning about the potential problems with %defattr has been added to the "rpmbuild" command. BZ# 1139805 If the standard output of the rpm utility was redirected to a file and the file system was full, rpm failed without writing any error messages. Now, rpm prints an error message as a standard error output if the described scenario occurs. BZ# 1076277 The rpm utility was unable to download and install packages the remote locations of which were specified with an IPv6 address and a specific path format. Now, rpm automatically uses the "--globoff" option with IPv6 addresses, which turns off cURL globbing, and allows packages to be properly downloaded and installed in the described scenario. BZ# 921969 , BZ# 1024517 If a Perl script in a package contained a string declared as a here-document that included the "use" or "require" words, or a multiline string with these words, the package in some cases had incorrect dependencies when it was created using the "rpmbuild" command. Now, the "use" and "require" strings are ignored as keywords in here-documents and multiline strings, which prevents the problem from occurring. BZ# 993868 Previously, build scriptlets using the pipe character ("|") in some cases failed. This update properly sets the default handling of the SIGPIPE signal in build scriptlets, thus fixing the bug. Enhancements BZ# 760793 The OrderWithRequires feature has been added to the RPM Package Manager, which provides the new OrderWithRequires package tag. If a package specified in OrderWithRequires is present in a package transaction, it is installed before the package with the corresponding OrderWithRequires tag is installed. However, unlike the Requires package tag, OrderWithRequires does not generate additional dependencies, so if the package specified in the tag is not present in the transaction, it is not downloaded. 
BZ# 1178083 The %power64 macro has been added to the rpm packages. This macro can be used to specify any or all 64-bit PowerPC architectures in RPM spec files by using the "%{power64}" string. Users of rpm are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. All running applications linked against the RPM library must be restarted for this update to take effect.
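To see what the new macro expands to on a given system, a quick sketch (the output varies by rpm version and architecture, and the macro only resolves once the updated packages are installed):
# Print the architectures covered by the %power64 macro on this host.
rpm --eval '%{power64}'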
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-rpm
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/overview_of_red_hat_enterprise_linux_for_sap_solutions_subscription/conscious-language-message_overview-of-rhel-for-sap-solutions-subscription-combined
Chapter 5. Managing user-owned OAuth access tokens
Chapter 5. Managing user-owned OAuth access tokens Users can review their own OAuth access tokens and delete any that are no longer needed. 5.1. Listing user-owned OAuth access tokens You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in. Procedure List all user-owned OAuth access tokens: $ oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full List user-owned OAuth access tokens for a particular OAuth client: $ oc get useroauthaccesstokens --field-selector=clientName="console" Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full 5.2. Viewing the details of a user-owned OAuth access token You can view the details of a user-owned OAuth access token. Procedure Describe the details of a user-owned OAuth access token: $ oc describe useroauthaccesstokens <token_name> Example output Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none> 1 The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in. 2 The client name, which describes where the token originated from. 3 The value in seconds from the creation time before this token expires. 4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used. 5 The scopes for this token. 6 The user name associated with this token. 5.3. Deleting user-owned OAuth access tokens The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that use the token. Procedure Delete the user-owned OAuth access token: $ oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted
[ "oc get useroauthaccesstokens", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc get useroauthaccesstokens --field-selector=clientName=\"console\"", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc describe useroauthaccesstokens <token_name>", "Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>", "oc delete useroauthaccesstokens <token_name>", "useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/managing-oauth-access-tokens
F.4.2. Runlevel Utilities
F.4.2. Runlevel Utilities One of the best ways to configure runlevels is to use an initscript utility . These tools are designed to simplify the task of maintaining files in the SysV init directory hierarchy and relieve system administrators from having to directly manipulate the numerous symbolic links in the subdirectories of /etc/rc.d/ . Red Hat Enterprise Linux provides three such utilities: /sbin/chkconfig - The /sbin/chkconfig utility is a simple command line tool for maintaining the /etc/rc.d/init.d/ directory hierarchy. /usr/sbin/ntsysv - The ncurses-based /usr/sbin/ntsysv utility provides an interactive text-based interface, which some find easier to use than chkconfig . Services Configuration Tool - The graphical Services Configuration Tool ( system-config-services ) program is a flexible utility for configuring runlevels. Refer to the chapter titled Services and Daemons in the Red Hat Enterprise Linux Deployment Guide for more information regarding these tools.
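For example, assuming the httpd package is installed, you could use chkconfig to enable the httpd service in its default runlevels and then confirm the change; the service name here is only an illustration:
chkconfig httpd on
chkconfig --list httpd
The first command creates the appropriate symbolic links under the /etc/rc.d/ subdirectories, and the second prints the service's on or off setting for each runlevel.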
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-boot-init-shutdown-sysv-util
Chapter 251. OpenStack Cinder Component
Chapter 251. OpenStack Cinder Component Available as of Camel version 2.19 The openstack-cinder component allows messages to be sent to OpenStack block storage services. 251.1. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-openstack</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version} must be replaced by the actual version of Camel. 251.2. URI Format openstack-cinder://hosturl[?options] You can append query options to the URI in the following format ?options=value&option2=value&... 251.3. URI Options The OpenStack Cinder component has no options. The OpenStack Cinder endpoint is configured using URI syntax: openstack-cinder:host with the following path and query parameters: 251.3.1. Path Parameters (1 parameter): Name Description Default Type host Required OpenStack host url String 251.3.2. Query Parameters (9 parameters): Name Description Default Type apiVersion (producer) OpenStack API version V3 String config (producer) OpenStack configuration Config domain (producer) Authentication domain default String operation (producer) The operation to do String password (producer) Required OpenStack password String project (producer) Required The project ID String subsystem (producer) Required OpenStack Cinder subsystem String username (producer) Required OpenStack username String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 251.4. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.openstack-cinder.enabled Enable openstack-cinder component true Boolean camel.component.openstack-cinder.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 251.5. Usage You can use the following settings for each subsystem: 251.6. volumes 251.6.1. Operations you can perform with the Volume producer Operation Description create Create new volume. get Get the volume. getAll Get all volumes. getAllTypes Get volume types. update Update the volume. delete Delete the volume. 251.6.2. Message headers evaluated by the Volume producer Header Type Description operation String The operation to perform. ID String ID of the volume. name String The volume name. description String Volume description. size Integer Size of volume. volumeType String Volume type. imageRef String ID of image. snapshotId String ID of snapshot. isBootable Boolean Is bootable. If you need more precise volume settings, you can create a new object of the type org.openstack4j.model.storage.block.Volume and send it in the message body. 251.7. snapshots 251.7.1. Operations you can perform with the Snapshot producer Operation Description create Create new snapshot. get Get the snapshot. getAll Get all snapshots. update Update the snapshot. delete Delete the snapshot. 251.7.2. Message headers evaluated by the Snapshot producer Header Type Description operation String The operation to perform. ID String ID of the snapshot. name String The snapshot name. description String The snapshot description. VolumeId String The Volume ID. force Boolean Force.
If you need more precise snapshot settings, you can create a new object of the type org.openstack4j.model.storage.block.VolumeSnapshot and send it in the message body. 251.8. See Also Configuring Camel Component Endpoint Getting Started openstack Component
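As an example of how the endpoint options and message headers described above fit together, the following Java DSL route is a minimal sketch that creates a volume; the host, credentials, project ID, and volume values are placeholders for this example rather than values taken from this chapter:
from("direct:createVolume")
    .setHeader("operation", constant("create"))   // operation header from the Volume producer table
    .setHeader("name", constant("myVolume"))      // volume name
    .setHeader("size", constant(10))              // size in GB
    .to("openstack-cinder://myhost?username=admin&password=secret&project=myProjectId&subsystem=volumes");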
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-openstack</artifactId> <version>USD{camel-version}</version> </dependency>", "openstack-cinder://hosturl[?options]", "openstack-cinder:host" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/openstack-cinder-component
Removing OpenShift Serverless
Removing OpenShift Serverless Red Hat OpenShift Serverless 1.35 Removing Serverless from your cluster Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/removing_openshift_serverless/index
Chapter 8. Assigning roles to hosts
Chapter 8. Assigning roles to hosts You can assign roles to your discovered hosts. These roles define the function of the host within the cluster. The role can be one of the standard Kubernetes types: control plane (master) or worker . The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API. If you do not select a role, the system selects one for you. You can change the role at any time before installation starts. 8.1. Selecting a role by using the web console You can select a role after the host finishes its discovery. Procedure Go to the Host Discovery tab and scroll down to the Host Inventory table. Select the Auto-assign drop-down for the required host. Select Control plane node to assign this host a control plane role. Select Worker to assign this host a worker role. Check the validation status. 8.2. Selecting a role by using the API You can select a role for the host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host can have one of the following roles: master : A host with the master role operates as a control plane node. worker : A host with the worker role operates as a worker node. By default, the Assisted Installer sets a host to auto-assign , which means that the Assisted Installer automatically determines whether the host takes the master or the worker role. Use this procedure to set the host's role. Prerequisites You have added hosts to the cluster. Procedure Refresh the API token: USD source refresh-token Get the host IDs: USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Example output [ "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5" ] Modify the host_role setting: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Replace <host_id> with the ID of the host. To return a host to automatic role selection later, see the example that follows this chapter. 8.3. Auto-assigning roles The Assisted Installer selects a role automatically for hosts if you do not assign a role yourself. The role selection mechanism factors in the host's memory, CPU, and disk space. It aims to assign a control plane role to the weakest hosts that meet the minimum requirements for control plane nodes. The number of control planes you specify in the cluster definition determines the number of control plane nodes that the Assisted Installer assigns. For details, see Setting the cluster details . All other hosts default to worker nodes. The goal is to provide enough resources to run the control plane and reserve the more capacity-intensive hosts for running the actual workloads. You can override the auto-assign decision at any time before installation. The validations ensure that the automatic selection is valid. 8.4. Additional resources Prerequisites
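If you need to return a host to automatic role selection after setting an explicit role, you can patch the same endpoint described in Selecting a role by using the API; the following sketch uses the auto-assign role value mentioned above and keeps <host_id> as a placeholder:
curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
  -X PATCH \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{ "host_role":"auto-assign" }' | jq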
[ "source refresh-token", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'", "[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_role-assignment
Part I. Red Hat Certificate System User Interfaces
Part I. Red Hat Certificate System User Interfaces
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/part-admin_console
Chapter 47. Red Hat Enterprise Linux System Roles Powered by Ansible
Chapter 47. Red Hat Enterprise Linux System Roles Powered by Ansible New packages: ansible Red Hat Enterprise Linux System Roles, now available as a Technology Preview, is a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles . This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. With Red Hat Enterprise Linux 7.4, the Red Hat Enterprise Linux System Roles packages are distributed through the Extras channel. For details regarding Red Hat Enterprise Linux System Roles, see https://access.redhat.com/articles/3050101 . Notes: Currently, Ansible is not a part of the Red Hat Enterprise Linux FIPS validation process. We hope to address this in future releases. Ansible is being included as an unsupported runtime dependency. (BZ#1313263)
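As a brief illustration of the interface, the following minimal playbook sketch applies the timesync role shipped in the rhel-system-roles package; the package name, role name, variable names, and NTP server shown here are assumptions for the example rather than values taken from this note:
---
- hosts: all
  vars:
    timesync_ntp_servers:
      - hostname: 0.rhel.pool.ntp.org   # placeholder NTP server
        iburst: yes
  roles:
    - rhel-system-roles.timesync
You would run it with ansible-playbook -i inventory timesync.yml against your managed hosts.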
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_red_hat_enterprise_linux_system_roles_powered_by_ansible
Chapter 4. Installing a cluster on RHV with user-provisioned infrastructure
Chapter 4. Installing a cluster on RHV with user-provisioned infrastructure In OpenShift Container Platform version 4.13, you can install a customized OpenShift Container Platform cluster on Red Hat Virtualization (RHV) and other infrastructure that you provide. The OpenShift Container Platform documentation uses the term user-provisioned infrastructure to refer to this infrastructure type. The following diagram shows an example of a potential OpenShift Container Platform cluster running on a RHV cluster. The RHV hosts run virtual machines that contain both control plane and compute pods. One of the hosts also runs a Manager virtual machine and a bootstrap virtual machine that contains a temporary control plane pod.] 4.1. Prerequisites The following items are required to install an OpenShift Container Platform cluster on a RHV environment. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on Red Hat Virtualization (RHV) . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.13 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation or process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . 
The RHV data center contains an RHV cluster. The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. 4.4. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.13. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. 
Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 4.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. 
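For example, a static network configuration passed as RHCOS boot arguments might look like the following line; the addresses, hostname, and interface name are placeholders for your environment rather than values from this guide:
ip=192.168.1.20::192.168.1.1:255.255.255.0:worker0.ocp4.example.org:ens3:none nameserver=192.168.1.1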
See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. Firewall Configure your firewall so your cluster has access to required sites. See also: Red Hat Virtualization Manager firewall requirements Host firewall requirements Load balancers Configure one or preferably two layer-4 load balancers: Provide load balancing for ports 6443 and 22623 on the control plane and bootstrap machines. Port 6443 provides access to the Kubernetes API server and must be reachable both internally and externally. Port 22623 must be accessible to nodes within the cluster. Provide load balancing for port 443 and 80 for machines that run the Ingress router, which are usually compute nodes in the default configuration. Both ports must be accessible from within and outside the cluster. DNS Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address. Create DNS records for api.<cluster_name>.<base_domain> (internal and external resolution) and api-int.<cluster_name>.<base_domain> (internal resolution) that point to the load balancer for the control plane machines. Create a DNS record for *.apps.<cluster_name>.<base_domain> that points to the load balancer for the Ingress router. For example, ports 443 and 80 of the compute machines. 4.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 4.1. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.2. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 4.6. Setting up the installation machine To run the binary openshift-install installation program and Ansible scripts, set up the RHV Manager or an Red Hat Enterprise Linux (RHEL) computer with network access to the RHV environment and the REST API on the Manager. Procedure Update or install Python3 and Ansible. For example: # dnf update python3 ansible Install the python3-ovirt-engine-sdk4 package to get the Python Software Development Kit. Install the ovirt.image-template Ansible role. On the RHV Manager and other Red Hat Enterprise Linux (RHEL) machines, this role is distributed as the ovirt-ansible-image-template package. For example, enter: # dnf install ovirt-ansible-image-template Install the ovirt.vm-infra Ansible role. On the RHV Manager and other RHEL machines, this role is distributed as the ovirt-ansible-vm-infra package. # dnf install ovirt-ansible-vm-infra Create an environment variable and assign an absolute or relative path to it. For example, enter: USD export ASSETS_DIR=./wrk Note The installation program uses this variable to create a directory where it saves important installation-related files. Later, the installation process reuses this variable to locate those asset files. Avoid deleting this assets directory; it is required for uninstalling the cluster. 4.7. Installing OpenShift Container Platform on RHV in insecure mode By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually. Although it is not recommended, you can override this functionality and install OpenShift Container Platform without verifying a certificate by installing OpenShift Container Platform on RHV in insecure mode. Warning Installing in insecure mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network. Procedure Create a file named ~/.ovirt/ovirt-config.yaml . 
Add the following content to ovirt-config.yaml : ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: "" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true 1 Specify the hostname or address of your oVirt engine. 2 Specify the fully qualified domain name of your oVirt engine. 3 Specify the admin password for your oVirt engine. Run the installer. 4.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. 
If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 4.10. Downloading the Ansible playbooks Download the Ansible playbooks for installing OpenShift Container Platform version 4.13 on RHV. Procedure On your installation machine, run the following commands: USD mkdir playbooks USD cd playbooks USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/workers.yml' steps After you download these Ansible playbooks, you must also create the environment variable for the assets directory and customize the inventory.yml file before you create an installation configuration file by running the installation program. 4.11. The inventory.yml file You use the inventory.yml file to define and create elements of the OpenShift Container Platform cluster you are installing. This includes elements such as the Red Hat Enterprise Linux CoreOS (RHCOS) image, virtual machine templates, bootstrap machine, control plane nodes, and worker nodes. You also use inventory.yml to destroy the cluster. The following inventory.yml example shows you the parameters and their default values. The quantities and numbers in these default values meet the requirements for running a production OpenShift Container Platform cluster in a RHV environment. 
Example inventory.yml file --- all: vars: ovirt_cluster: "Default" ocp: assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}" ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml" # --- # {op-system} section # --- rhcos: image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz" local_cmp_image_path: "/tmp/rhcos.qcow2.gz" local_image_path: "/tmp/rhcos.qcow2" # --- # Profiles section # --- control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: "{{ metadata.infraID }}-bootstrap" ocp_type: bootstrap profile: "{{ control_plane }}" type: server - name: "{{ metadata.infraID }}-master0" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master1" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master2" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-worker0" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker1" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker2" ocp_type: worker profile: "{{ compute }}" Important Enter values for parameters whose descriptions begin with "Enter." Otherwise, you can use the default value or replace it with a new value. General section ovirt_cluster : Enter the name of an existing RHV cluster in which to install the OpenShift Container Platform cluster. ocp.assets_dir : The path of a directory the openshift-install installation program creates to store the files that it generates. ocp.ovirt_config_path : The path of the ovirt-config.yaml file the installation program generates, for example, ./wrk/install-config.yaml . This file contains the credentials required to interact with the REST API of the Manager. Red Hat Enterprise Linux CoreOS (RHCOS) section image_url : Enter the URL of the RHCOS image you specified for download. local_cmp_image_path : The path of a local download directory for the compressed RHCOS image. local_image_path : The path of a local directory for the extracted RHCOS image. Profiles section This section consists of two profiles: control_plane : The profile of the bootstrap and control plane nodes. compute : The profile of workers nodes in the compute plane. These profiles have the following parameters. The default values of the parameters meet the minimum requirements for running a production cluster. You can increase or customize these values to meet your workload requirements. cluster : The value gets the cluster name from ovirt_cluster in the General Section. memory : The amount of memory, in GB, for the virtual machine. sockets : The number of sockets for the virtual machine. cores : The number of cores for the virtual machine. template : The name of the virtual machine template. 
If plan to install multiple clusters, and these clusters use templates that contain different specifications, prepend the template name with the ID of the cluster. operating_system : The type of guest operating system in the virtual machine. With oVirt/RHV version 4.4, this value must be rhcos_x64 so the value of Ignition script can be passed to the VM. type : Enter server as the type of the virtual machine. Important You must change the value of the type parameter from high_performance to server . disks : The disk specifications. The control_plane and compute nodes can have different storage domains. size : The minimum disk size. name : Enter the name of a disk connected to the target cluster in RHV. interface : Enter the interface type of the disk you specified. storage_domain : Enter the storage domain of the disk you specified. nics : Enter the name and network the virtual machines use. You can also specify the virtual network interface profile. By default, NICs obtain their MAC addresses from the oVirt/RHV MAC pool. Virtual machines section This final section, vms , defines the virtual machines you plan to create and deploy in the cluster. By default, it provides the minimum number of control plane and worker nodes for a production environment. vms contains three required elements: name : The name of the virtual machine. In this case, metadata.infraID prepends the virtual machine name with the infrastructure ID from the metadata.yml file. ocp_type : The role of the virtual machine in the OpenShift Container Platform cluster. Possible values are bootstrap , master , worker . profile : The name of the profile from which each virtual machine inherits specifications. Possible values in this example are control_plane or compute . You can override the value a virtual machine inherits from its profile. To do this, you add the name of the profile attribute to the virtual machine in inventory.yml and assign it an overriding value. To see an example of this, examine the name: "{{ metadata.infraID }}-bootstrap" virtual machine in the preceding inventory.yml example: It has a type attribute whose value, server , overrides the value of the type attribute this virtual machine would otherwise inherit from the control_plane profile. Metadata variables For virtual machines, metadata.infraID prepends the name of the virtual machine with the infrastructure ID from the metadata.json file you create when you build the Ignition files. The playbooks use the following code to read infraID from the specific file located in the ocp.assets_dir . --- - name: include metadata.json vars include_vars: file: "{{ ocp.assets_dir }}/metadata.json" name: metadata ... 4.12. Specifying the RHCOS image settings Update the Red Hat Enterprise Linux CoreOS (RHCOS) image settings of the inventory.yml file. Later, when you run this file one of the playbooks, it downloads a compressed Red Hat Enterprise Linux CoreOS (RHCOS) image from the image_url URL to the local_cmp_image_path directory. The playbook then uncompresses the image to the local_image_path directory and uses it to create oVirt/RHV templates. Procedure Locate the RHCOS image download page for the version of OpenShift Container Platform you are installing, such as Index of /pub/openshift-v4/dependencies/rhcos/latest/latest . From that download page, copy the URL of an OpenStack qcow2 image, such as https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz . Edit the inventory.yml playbook you downloaded earlier. 
In it, paste the URL as the value for image_url . For example: rhcos: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz" 4.13. Creating the install config file You create an installation configuration file by running the installation program, openshift-install , and responding to its prompts with information you specified or gathered earlier. When you finish responding to the prompts, the installation program creates an initial version of the install-config.yaml file in the assets directory you specified earlier, for example, ./wrk/install-config.yaml The installation program also creates a file, USDHOME/.ovirt/ovirt-config.yaml , that contains all the connection parameters that are required to reach the Manager and use its REST API. NOTE: The installation process does not use values you supply for some parameters, such as Internal API virtual IP and Ingress virtual IP , because you have already configured them in your infrastructure DNS. It also uses the values you supply for parameters in inventory.yml , like the ones for oVirt cluster , oVirt storage , and oVirt network . And uses a script to remove or replace these same values from install-config.yaml with the previously mentioned virtual IPs . Procedure Run the installation program: USD openshift-install create install-config --dir USDASSETS_DIR Respond to the installation program's prompts with information about your system. Example output ? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********> For Internal API virtual IP and Ingress virtual IP , supply the IP addresses you specified when you configured the DNS service. Together, the values you enter for the oVirt cluster and Base Domain prompts form the FQDN portion of URLs for the REST API and any applications you create, such as https://api.ocp4.example.org:6443/ and https://console-openshift-console.apps.ocp4.example.org . You can get the pull secret from the Red Hat OpenShift Cluster Manager . 4.14. Customizing install-config.yaml Here, you use three Python scripts to override some of the installation program's default behaviors: By default, the installation program uses the machine API to create nodes. To override this default behavior, you set the number of compute nodes to zero replicas. Later, you use Ansible playbooks to create the compute nodes. By default, the installation program sets the IP range of the machine network for nodes. To override this default behavior, you set the IP range to match your infrastructure. By default, the installation program sets the platform to ovirt . However, installing a cluster on user-provisioned infrastructure is more similar to installing a cluster on bare metal. Therefore, you delete the ovirt platform section from install-config.yaml and change the platform to none . Instead, you use inventory.yml to specify all of the required settings. Note These snippets work with Python 3 and Python 2. 
Procedure Set the number of compute nodes to zero replicas: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["compute"][0]["replicas"] = 0 open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Set the IP range of the machine network. For example, to set the range to 172.16.0.0/16 , enter: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16" open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Remove the ovirt section and change the platform to none : USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) platform = conf["platform"] del platform["ovirt"] platform["none"] = {} open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Warning Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to none , allowing OpenShift Container Platform to identify each node as a bare-metal node and the cluster as a bare-metal cluster. This is the same as installing a cluster on any platform , and has the following limitations: There will be no cluster provider, so you must manually add each machine and there will be no node scaling capabilities. The oVirt CSI driver will not be installed and there will be no CSI capabilities. 4.15. Generate manifest files Use the installation program to generate a set of manifest files in the assets directory. The command to generate the manifest files displays a warning message before it consumes the install-config.yaml file. If you plan to reuse the install-config.yaml file, create a backup copy of it before you generate the manifest files. Procedure Optional: Create a backup copy of the install-config.yaml file: USD cp install-config.yaml install-config.yaml.backup Generate a set of manifests in your assets directory: USD openshift-install create manifests --dir USDASSETS_DIR This command displays the following messages. Example output INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings The command generates the following manifest files: Example output USD tree .
└── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml steps Make control plane nodes non-schedulable. 4.16. Making control-plane nodes non-schedulable Because you are manually creating and deploying the control plane machines, you must configure a manifest file to make the control plane nodes non-schedulable. Procedure To make the control plane nodes non-schedulable, enter: USD python3 -c 'import os, yaml path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"] data = yaml.safe_load(open(path)) data["spec"]["mastersSchedulable"] = False open(path, "w").write(yaml.dump(data, default_flow_style=False))' 4.17. Building the Ignition files To build the Ignition files from the manifest files you just generated and modified, you run the installation program. This action creates a Red Hat Enterprise Linux CoreOS (RHCOS) machine, initramfs , which fetches the Ignition files and performs the configurations needed to create a node. In addition to the Ignition files, the installation program generates the following: An auth directory that contains the admin credentials for connecting to the cluster with the oc and kubectl utilities. A metadata.json file that contains information such as the OpenShift Container Platform cluster name, cluster ID, and infrastructure ID for the current installation. The Ansible playbooks for this installation process use the value of infraID as a prefix for the virtual machines they create. This prevents naming conflicts when there are multiple installations in the same oVirt/RHV cluster. Note Certificates in Ignition configuration files expire after 24 hours. Complete the cluster installation and keep the cluster running in a non-degraded state for 24 hours so that the first certificate rotation can finish. Procedure To build the Ignition files, enter: USD openshift-install create ignition-configs --dir USDASSETS_DIR Example output USD tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign 4.18. Creating templates and virtual machines After confirming the variables in the inventory.yml , you run the first Ansible provisioning playbook, create-templates-and-vms.yml . This playbook uses the connection parameters for the RHV Manager from USDHOME/.ovirt/ovirt-config.yaml and reads metadata.json in the assets directory. 
If a local Red Hat Enterprise Linux CoreOS (RHCOS) image is not already present, the playbook downloads one from the URL you specified for image_url in inventory.yml . It extracts the image and uploads it to RHV to create templates. The playbook creates a template based on the control_plane and compute profiles in the inventory.yml file. If these profiles have different names, it creates two templates. When the playbook finishes, the virtual machines it creates are stopped. You can get information from them to help configure other infrastructure elements. For example, you can get the virtual machines' MAC addresses to configure DHCP to assign permanent IP addresses to the virtual machines. Procedure In inventory.yml , under the control_plane and compute variables, change both instances of type: high_performance to type: server . Optional: If you plan to perform multiple installations to the same cluster, create different templates for each OpenShift Container Platform installation. In the inventory.yml file, prepend the value of template with infraID . For example: control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: "{{ metadata.infraID }}-rhcos_tpl" operating_system: "rhcos_x64" ... Create the templates and virtual machines: USD ansible-playbook -i inventory.yml create-templates-and-vms.yml 4.19. Creating the bootstrap machine You create a bootstrap machine by running the bootstrap.yml playbook. This playbook starts the bootstrap virtual machine, and passes it the bootstrap.ign Ignition file from the assets directory. The bootstrap node configures itself so it can serve Ignition files to the control plane nodes. To monitor the bootstrap process, you use the console in the RHV Administration Portal or connect to the virtual machine by using SSH. Procedure Create the bootstrap machine: USD ansible-playbook -i inventory.yml bootstrap.yml Connect to the bootstrap machine using a console in the Administration Portal or SSH. Replace <bootstrap_ip> with the bootstrap node IP address. To use SSH, enter: USD ssh core@<boostrap.ip> Collect bootkube.service journald unit logs for the release image service from the bootstrap node: [core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. 4.20. Creating the control plane nodes You create the control plane nodes by running the masters.yml playbook. This playbook passes the master.ign Ignition file to each of the virtual machines. The Ignition file contains a directive for the control plane node to get the Ignition from a URL such as https://api-int.ocp4.example.org:22623/config/master . The port number in this URL is managed by the load balancer, and is accessible only inside the cluster. Procedure Create the control plane nodes: USD ansible-playbook -i inventory.yml masters.yml While the playbook creates your control plane, monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR Example output INFO API v1.26.0 up INFO Waiting up to 40m0s for bootstrapping to complete... When all the pods on the control plane nodes and etcd are up and running, the installation program displays the following output. 
Example output INFO It is now safe to remove the bootstrap resources 4.21. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 4.22. Removing the bootstrap machine After the wait-for command shows that the bootstrap process is complete, you must remove the bootstrap virtual machine to free up compute, memory, and storage resources. Also, remove settings for the bootstrap machine from the load balancer directives. Procedure To remove the bootstrap machine from the cluster, enter: USD ansible-playbook -i inventory.yml retire-bootstrap.yml Remove settings for the bootstrap machine from the load balancer directives. 4.23. Creating the worker nodes and completing the installation Creating worker nodes is similar to creating control plane nodes. However, worker nodes do not automatically join the cluster. To add them to the cluster, you review and approve the workers' pending CSRs (Certificate Signing Requests). After approving the first requests, you continue approving CSRs until all of the worker nodes are approved. When you complete this process, the worker nodes become Ready and can have pods scheduled to run on them. Finally, monitor the command line to see when the installation process completes. Procedure Create the worker nodes: USD ansible-playbook -i inventory.yml workers.yml To list all of the CSRs, enter: USD oc get csr -A Eventually, this command displays one CSR per node. For example: Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending To filter the list and see only pending CSRs, enter: USD watch "oc get csr -A | grep pending -i" This command refreshes the output every two seconds and displays only pending CSRs. 
For example: Example output Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending Inspect each pending request. For example: Example output USD oc describe csr csr-m724n Example output Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none> If the CSR information is correct, approve the request: USD oc adm certificate approve csr-m724n Wait for the installation process to finish: USD openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug When the installation completes, the command line displays the URL of the OpenShift Container Platform web console and the administrator user name and password. 4.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service
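Referring back to the CSR approval step in Section 4.23: when several worker nodes boot at the same time, approving each certificate individually can be tedious. The following sketch, which is not part of the official procedure, approves every currently pending CSR in one pass; it assumes the oc client is using the same kubeconfig exported earlier:

```bash
# Approve all CSRs that do not yet have a status (that is, pending requests).
# Run it again after new nodes boot, because serving-certificate CSRs appear
# only after the corresponding client CSRs have been approved.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```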
[ "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "dnf update python3 ansible", "dnf install ovirt-ansible-image-template", "dnf install ovirt-ansible-vm-infra", "export ASSETS_DIR=./wrk", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir playbooks", "cd playbooks", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/workers.yml'", "--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" 
ocp_type: worker profile: \"{{ compute }}\"", "--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata", "rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\"", "openshift-install create install-config --dir USDASSETS_DIR", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir USDASSETS_DIR", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings", "tree . 
└── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml", "python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openshift-install create ignition-configs --dir USDASSETS_DIR", "tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml bootstrap.yml", "ssh core@<boostrap.ip>", "[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service", "ansible-playbook -i inventory.yml masters.yml", "openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR", "INFO API v1.26.0 up INFO Waiting up to 40m0s for bootstrapping to complete", "INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "ansible-playbook -i inventory.yml retire-bootstrap.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -A", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 
7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "watch \"oc get csr -A | grep pending -i\"", "Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr csr-m724n", "Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>", "oc adm certificate approve csr-m724n", "openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_rhv/installing-rhv-user-infra
Chapter 17. Adding Variables to the Watch List
Chapter 17. Adding Variables to the Watch List Overview By adding variables to the watch list, you can focus on particular variables to see whether their values change as expected as they flow through the routing context. Procedure To add a variable to the watch list: If necessary, start the debugger. See Chapter 14, Running the Camel Debugger . In the Variables view, right-click a variable you want to track to open the context menu. Select Watch . A new view, Expressions , opens to the Breakpoints view. The Expressions view displays the name of the variable being watched and its current value, for example: Repeat [watch1] and [watch2] to add additional variables to the watch list. Note The variables you add remain in the watch list until you remove them. To stop watching a variable, right-click it in the list to open the context menu, and then click Remove . With the Expressions view open, step through the routing context to track how the value of each variable in the watch list changes as it reaches each step in the route. Related topics Chapter 16, Changing Variable Values
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/addWatchList
Chapter 8. Using CPU Manager and Topology Manager
Chapter 8. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 8.1. Setting up CPU Manager To configure CPU manager, create a KubeletConfig custom resource (CR) and apply it to the desired set of nodes. Procedure Label a node by running the following command: # oc label node perf-node.example.com cpumanager=true To enable CPU Manager for all compute nodes, edit the CR by running the following command: # oc edit machineconfigpool worker Add the custom-kubelet: cpumanager-enabled label to metadata.labels section. metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config by running the following command: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. 
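If the change does trigger a rollout, it can take several minutes for every node in the pool to receive the new rendered configuration. As a sketch, assuming the default worker pool name, you can wait for the pool to report Updated before checking the merged kubelet config in the next step:

```bash
# Block until the worker MachineConfigPool reports the Updated condition.
# The pool name and timeout are illustrative; adjust them for your cluster.
oc wait mcp/worker --for=condition=Updated --timeout=30m
```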
Check for the merged kubelet config by running the following command: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the compute node for the updated kubelet.conf file by running the following command: # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a project by running the following command: USD oc new-project <project_name> Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verification Verify that the pod is scheduled to the node that you labeled by running the following command: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that a CPU has been exclusively assigned to the pod by running the following command: # oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2 Example output NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process by running the following commands: # oc debug node/perf-node.example.com sh-4.2# systemctl status | grep -B5 pause Note If the output returns multiple pause process entries, you must identify the correct pause process. Example output # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Verify that pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice subdirectory by running the following commands: # cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "USDi "; cat USDi ; done Note Pods of other QoS tiers end up in child cgroups of the parent kubepods . 
Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task by running the following command: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod on the system cannot run on the core allocated for the Guaranteed pod. For example, to verify the pod in the besteffort QoS tier, run the following commands: # cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 8.2. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. 
This results in a pod in a Terminated state with a pod admission failure. 8.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: none , best-effort , restricted , single-numa-node . 8.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The next pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
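After applying the allocation policy described in Section 8.3, you can confirm that it reached the node by reusing the verification pattern shown earlier for CPU Manager. A minimal sketch; the node name is illustrative:

```bash
# Print the Topology Manager policy from the generated kubelet configuration on the node.
oc debug node/perf-node.example.com -- chroot /host \
  grep -i topologymanager /etc/kubernetes/kubelet.conf
```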
[ "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc new-project <project_name>", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2", "NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m", "oc debug node/perf-node.example.com", "sh-4.2# systemctl status | grep -B5 pause", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope", "for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus", "oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/using-cpu-manager
Chapter 8. Storage and File Systems
Chapter 8. Storage and File Systems This chapter outlines supported file systems and configuration options that affect application performance for both I/O and file systems in Red Hat Enterprise Linux 7. Section 8.1, "Considerations" discusses the I/O and file system related factors that affect performance. Section 8.2, "Monitoring and Diagnosing Performance Problems" teaches you how to use Red Hat Enterprise Linux 7 tools to diagnose performance problems related to I/O or file system configuration details. Section 8.4, "Configuration Tools" discusses the tools and strategies you can use to solve I/O and file system related performance problems in Red Hat Enterprise Linux 7. 8.1. Considerations The appropriate settings for storage and file system performance are highly dependent on the purpose of the storage. I/O and file system performance can be affected by any of the following factors: Data write or read patterns Data alignment with underlying geometry Block size File system size Journal size and location Recording access times Ensuring data reliability Pre-fetching data Pre-allocating disk space File fragmentation Resource contention Read this chapter to gain an understanding of the formatting and mount options that affect file system throughput, scalability, responsiveness, resource usage, and availability. 8.1.1. I/O Schedulers The I/O scheduler determines when and for how long I/O operations run on a storage device. It is also known as the I/O elevator. Red Hat Enterprise Linux 7 provides three I/O schedulers. deadline The default I/O scheduler for all block devices, except for SATA disks. Deadline attempts to provide a guaranteed latency for requests from the point at which requests reach the I/O scheduler. This scheduler is suitable for most use cases, but particularly those in which read operations occur more often than write operations. Queued I/O requests are sorted into a read or write batch and then scheduled for execution in increasing LBA order. Read batches take precedence over write batches by default, as applications are more likely to block on read I/O. After a batch is processed, deadline checks how long write operations have been starved of processor time and schedules the read or write batch as appropriate. The number of requests to handle per batch, the number of read batches to issue per write batch, and the amount of time before requests expire are all configurable; see Section 8.4.4, "Tuning the Deadline Scheduler" for details. cfq The default scheduler only for devices identified as SATA disks. The Completely Fair Queueing scheduler, cfq , divides processes into three separate classes: real time, best effort, and idle. Processes in the real time class are always performed before processes in the best effort class, which are always performed before processes in the idle class. This means that processes in the real time class can starve both best effort and idle processes of processor time. Processes are assigned to the best effort class by default. cfq uses historical data to anticipate whether an application will issue more I/O requests in the near future. If more I/O is expected, cfq idles to wait for the new I/O, even if I/O from other processes is waiting to be processed. Because of this tendency to idle, the cfq scheduler should not be used in conjunction with hardware that does not incur a large seek penalty unless it is tuned for this purpose. 
It should also not be used in conjunction with other non-work-conserving schedulers, such as a host-based hardware RAID controller, as stacking these schedulers tends to cause a large amount of latency. cfq behavior is highly configurable; see Section 8.4.5, "Tuning the CFQ Scheduler" for details. noop The noop I/O scheduler implements a simple FIFO (first-in first-out) scheduling algorithm. Requests are merged at the generic block layer through a simple last-hit cache. This can be the best scheduler for CPU-bound systems using fast storage. For details on setting a different default I/O scheduler, or specifying a different scheduler for a particular device, see Section 8.4, "Configuration Tools" . 8.1.2. File Systems Read this section for details about supported file systems in Red Hat Enterprise Linux 7, their recommended use cases, and the format and mount options available to file systems in general. Detailed tuning recommendations for these file systems are available in Section 8.4.7, "Configuring File Systems for Performance" . 8.1.2.1. XFS XFS is a robust and highly scalable 64-bit file system. It is the default file system in Red Hat Enterprise Linux 7. XFS uses extent-based allocation, and features a number of allocation schemes, including pre-allocation and delayed allocation, both of which reduce fragmentation and aid performance. It also supports metadata journaling, which can facilitate crash recovery. XFS can be defragmented and enlarged while mounted and active, and Red Hat Enterprise Linux 7 supports several XFS-specific backup and restore utilities. As of Red Hat Enterprise Linux 7.0 GA, XFS is supported to a maximum file system size of 500 TB, and a maximum file offset of 8 EB (sparse files). For details about administering XFS, see the Red Hat Enterprise Linux 7 Storage Administration Guide . For assistance tuning XFS for a specific purpose, see Section 8.4.7.1, "Tuning XFS" . 8.1.2.2. Ext4 Ext4 is a scalable extension of the ext3 file system. Its default behavior is optimal for most work loads. However, it is supported only to a maximum file system size of 50 TB, and a maximum file size of 16 TB. For details about administering ext4, see the Red Hat Enterprise Linux 7 Storage Administration Guide . For assistance tuning ext4 for a specific purpose, see Section 8.4.7.2, "Tuning ext4" . 8.1.2.3. Btrfs (Technology Preview) The default file system for Red Hat Enterprise Linux 7 is XFS. Btrfs (B-tree file system), a relatively new copy-on-write (COW) file system, is shipped as a Technology Preview . Some of the unique Btrfs features include: The ability to take snapshots of specific files, volumes or sub-volumes rather than the whole file system; supporting several versions of redundant array of inexpensive disks (RAID); back referencing map I/O errors to file system objects; transparent compression (all files on the partition are automatically compressed); checksums on data and meta-data. Although Btrfs is considered a stable file system, it is under constant development, so some functionality, such as the repair tools, are basic compared to more mature file systems. Currently, selecting Btrfs is suitable when advanced features (such as snapshots, compression, and file data checksums) are required, but performance is relatively unimportant. If advanced features are not required, the risk of failure and comparably weak performance over time make other file systems preferable. 
Another drawback, compared to other file systems, is the maximum supported file system size of 50 TB. For more information, see Section 8.4.7.3, "Tuning Btrfs" , and the chapter on Btrfs in the Red Hat Enterprise Linux 7 Storage Administration Guide . 8.1.2.4. GFS2 Global File System 2 (GFS2) is part of the High Availability Add-On that provides clustered file system support to Red Hat Enterprise Linux 7. GFS2 provides a consistent file system image across all servers in a cluster, which allows servers to read from and write to a single shared file system. GFS2 is supported to a maximum file system size of 100 TB. For details on administering GFS2, see the Global File System 2 guide or the Red Hat Enterprise Linux 7 Storage Administration Guide . For information on tuning GFS2 for a specific purpose, see Section 8.4.7.4, "Tuning GFS2" . 8.1.3. Generic Tuning Considerations for File Systems This section covers tuning considerations common to all file systems. For tuning recommendations specific to your file system, see Section 8.4.7, "Configuring File Systems for Performance" . 8.1.3.1. Considerations at Format Time Some file system configuration decisions cannot be changed after the device is formatted. This section covers the options available to you for decisions that must be made before you format your storage device. Size Create an appropriately-sized file system for your workload. Smaller file systems have proportionally shorter backup times and require less time and memory for file system checks. However, if your file system is too small, its performance will suffer from high fragmentation. Block size The block is the unit of work for the file system. The block size determines how much data can be stored in a single block, and therefore the smallest amount of data that is written or read at one time. The default block size is appropriate for most use cases. However, your file system will perform better and store data more efficiently if the block size (or the size of multiple blocks) is the same as or slightly larger than amount of data that is typically read or written at one time. A small file will still use an entire block. Files can be spread across multiple blocks, but this can create additional runtime overhead. Additionally, some file systems are limited to a certain number of blocks, which in turn limits the maximum size of the file system. Block size is specified as part of the file system options when formatting a device with the mkfs command. The parameter that specifies the block size varies with the file system; see the mkfs man page for your file system for details. For example, to see the options available when formatting an XFS file system, execute the following command. Geometry File system geometry is concerned with the distribution of data across a file system. If your system uses striped storage, like RAID, you can improve performance by aligning data and metadata with the underlying storage geometry when you format the device. Many devices export recommended geometry, which is then set automatically when the devices are formatted with a particular file system. If your device does not export these recommendations, or you want to change the recommended settings, you must specify geometry manually when you format the device with mkfs . The parameters that specify file system geometry vary with the file system; see the mkfs man page for your file system for details. 
For example, to see the options available when formatting an ext4 file system, execute the following command. External journals Journaling file systems document the changes that will be made during a write operation in a journal file prior to the operation being executed. This reduces the likelihood that a storage device will become corrupted in the event of a system crash or power failure, and speeds up the recovery process. Metadata-intensive workloads involve very frequent updates to the journal. A larger journal uses more memory, but reduces the frequency of write operations. Additionally, you can improve the seek time of a device with a metadata-intensive workload by placing its journal on dedicated storage that is as fast as, or faster than, the primary storage. Warning Ensure that external journals are reliable. Losing an external journal device will cause file system corruption. External journals must be created at format time, with journal devices being specified at mount time. For details, see the mkfs and mount man pages. 8.1.3.2. Considerations at Mount Time This section covers tuning decisions that apply to most file systems and can be specified as the device is mounted. Barriers File system barriers ensure that file system metadata is correctly written and ordered on persistent storage, and that data transmitted with fsync persists across a power outage. On versions of Red Hat Enterprise Linux, enabling file system barriers could significantly slow applications that relied heavily on fsync , or created and deleted many small files. In Red Hat Enterprise Linux 7, file system barrier performance has been improved such that the performance effects of disabling file system barriers are negligible (less than 3%). For further information, see the Red Hat Enterprise Linux 7 Storage Administration Guide . Access Time Every time a file is read, its metadata is updated with the time at which access occurred ( atime ). This involves additional write I/O. In most cases, this overhead is minimal, as by default Red Hat Enterprise Linux 7 updates the atime field only when the access time was older than the times of last modification ( mtime ) or status change ( ctime ). However, if updating this metadata is time consuming, and if accurate access time data is not required, you can mount the file system with the noatime mount option. This disables updates to metadata when a file is read. It also enables nodiratime behavior, which disables updates to metadata when a directory is read. Read-ahead Read-ahead behavior speeds up file access by pre-fetching data that is likely to be needed soon and loading it into the page cache, where it can be retrieved more quickly than if it were on disk. The higher the read-ahead value, the further ahead the system pre-fetches data. Red Hat Enterprise Linux attempts to set an appropriate read-ahead value based on what it detects about your file system. However, accurate detection is not always possible. For example, if a storage array presents itself to the system as a single LUN, the system detects the single LUN, and does not set the appropriate read-ahead value for an array. Workloads that involve heavy streaming of sequential I/O often benefit from high read-ahead values. The storage-related tuned profiles provided with Red Hat Enterprise Linux 7 raise the read-ahead value, as does using LVM striping, but these adjustments are not always sufficient for all workloads. 
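In such cases you can inspect and override read-ahead at the block device level. A minimal sketch, assuming an example device name; the values are in 512-byte sectors and the change does not persist across reboots:

```bash
# Show the current read-ahead value, in 512-byte sectors, for the device.
blockdev --getra /dev/sda
# Raise read-ahead to 8192 sectors (4 MiB) for a heavily sequential workload.
blockdev --setra 8192 /dev/sda
```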
The parameters that define read-ahead behavior vary with the file system; see the mount man page for details. 8.1.3.3. Maintenance Regularly discarding blocks that are not in use by the file system is a recommended practice for both solid-state disks and thinly-provisioned storage. There are two methods of discarding unused blocks: batch discard and online discard. Batch discard This type of discard is part of the fstrim command. It discards all unused blocks in a file system that match criteria specified by the administrator. Red Hat Enterprise Linux 7 supports batch discard on XFS and ext4 formatted devices that support physical discard operations (that is, on HDD devices where the value of /sys/block/ devname /queue/discard_max_bytes is not zero, and SSD devices where the value of /sys/block/ devname /queue/discard_granularity is not 0 ). Online discard This type of discard operation is configured at mount time with the discard option, and runs in real time without user intervention. However, online discard only discards blocks that are transitioning from used to free. Red Hat Enterprise Linux 7 supports online discard on XFS and ext4 formatted devices. Red Hat recommends batch discard except where online discard is required to maintain performance, or where batch discard is not feasible for the system's workload. Pre-allocation Pre-allocation marks disk space as being allocated to a file without writing any data into that space. This can be useful in limiting data fragmentation and poor read performance. Red Hat Enterprise Linux 7 supports pre-allocating space on XFS, ext4, and GFS2 devices at mount time; see the mount man page for the appropriate parameter for your file system. Applications can also benefit from pre-allocating space by using the fallocate(2) glibc call.
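As a brief illustration of the maintenance operations described above, where the mount point and sizes are examples only:

```bash
# Batch discard: trim all unused blocks on a mounted XFS or ext4 file system.
fstrim -v /mnt/data
# Pre-allocation: reserve 10 GiB for a file without writing any data into it.
fallocate -l 10G /mnt/data/guest-disk.img
```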
[ "man mkfs.xfs", "man mkfs.ext4", "man mkfs", "man mount", "man mount" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/chap-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Storage_and_File_Systems
probe::ipmib.InDiscards
probe::ipmib.InDiscards Name probe::ipmib.InDiscards - Count discarded inbound packets Synopsis ipmib.InDiscards Values op value to be added to the counter (default value of 1) skb pointer to the struct sk_buff being acted on Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global InDiscards (equivalent to SNMP's MIB STATS_MIB_INDISCARDS)
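For example, a short SystemTap script run from the shell can use this probe to tally discarded inbound packets over a fixed interval. This is only a sketch; the ten-second window is arbitrary:

```bash
# Count inbound discards for 10 seconds, then print the total and exit.
stap -e 'global discards
probe ipmib.InDiscards { discards += op }
probe timer.s(10) { printf("InDiscards over 10s: %d\n", discards); exit() }'
```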
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-indiscards
3.3. Creating Guests with virt-manager
3.3. Creating Guests with virt-manager The Virtual Machine Manager, also known as virt-manager , is a graphical tool for creating and managing guest virtual machines. This section covers how to install a Red Hat Enterprise Linux 7 guest virtual machine on a Red Hat Enterprise Linux 7 host using virt-manager . These procedures assume that the KVM hypervisor and all other required packages are installed and the host is configured for virtualization. For more information on installing the virtualization packages, see Chapter 2, Installing the Virtualization Packages . 3.3.1. virt-manager installation overview The New VM wizard breaks down the virtual machine creation process into five steps: Choosing the hypervisor and installation type Locating and configuring the installation media Configuring memory and CPU options Configuring the virtual machine's storage Configuring virtual machine name, networking, architecture, and other hardware settings Ensure that virt-manager can access the installation media (whether locally or over the network) before you continue. 3.3.2. Creating a Red Hat Enterprise Linux 7 Guest with virt-manager This procedure covers creating a Red Hat Enterprise Linux 7 guest virtual machine with a locally stored installation DVD or DVD image. Red Hat Enterprise Linux 7 DVD images are available from the Red Hat Customer Portal . Note If you wish to install a virtual machine with SecureBoot enabled, see Creating a SecureBoot Red Hat Enterprise Linux 7 Guest with virt-manager . Procedure 3.1. Creating a Red Hat Enterprise Linux 7 guest virtual machine with virt-manager using local installation media Optional: Preparation Prepare the storage environment for the virtual machine. For more information on preparing storage, see Chapter 13, Managing Storage for Virtual Machines . Important Various storage types may be used for storing guest virtual machines. However, for a virtual machine to be able to use migration features, the virtual machine must be created on networked storage. Red Hat Enterprise Linux 7 requires at least 1 GB of storage space. However, Red Hat recommends at least 5 GB of storage space for a Red Hat Enterprise Linux 7 installation and for the procedures in this guide. Open virt-manager and start the wizard Open virt-manager by executing the virt-manager command as root or opening Applications System Tools Virtual Machine Manager . Figure 3.1. The Virtual Machine Manager window Optionally, open a remote hypervisor by selecting the hypervisor and clicking the Connect button. Click to start the new virtualized guest wizard. The New VM window opens. Specify installation type Select the installation type: Local install media (ISO image or CDROM) This method uses an image of an installation disk (for example, .iso ). However, using a host CD-ROM or a DVD-ROM device is not possible . Network Install (HTTP, FTP, or NFS) This method involves the use of a mirrored Red Hat Enterprise Linux or Fedora installation tree to install a guest. The installation tree must be accessible through either HTTP, FTP, or NFS. If you select Network Install , provide the installation URL and also Kernel options, if required. Network Boot (PXE) This method uses a Preboot eXecution Environment (PXE) server to install the guest virtual machine. Setting up a PXE server is covered in the Red Hat Enterprise Linux 7 Installation Guide . To install using network boot, the guest must have a routable IP address or shared network device. If you select Network Boot , continue to STEP 5. 
After all steps are completed, a DHCP request is sent, and if a valid PXE server is found, the guest virtual machine's installation process starts. Import existing disk image This method allows you to create a new guest virtual machine and import a disk image (containing a pre-installed, bootable operating system) to it. Figure 3.2. Virtual machine installation method Click Forward to continue. Select the installation source If you selected Local install media (ISO image or CDROM) , specify your intended local installation media. Figure 3.3. Local ISO image installation Warning Even though the option is currently present in the GUI, installing from a physical CD-ROM or DVD device on the host is not possible. Therefore, selecting the Use CDROM or DVD option will cause the VM installation to fail. For details, see the Red Hat Knowledge Base . To install from an ISO image, select Use ISO image and click the Browse... button to open the Locate media volume window. Select the installation image you wish to use, and click Choose Volume . If no images are displayed in the Locate media volume window, click the Browse Local button to browse the host machine for the installation image or DVD drive containing the installation disk. Select the installation image or DVD drive containing the installation disk and click Open ; the volume is selected for use and you are returned to the Create a new virtual machine wizard. Important For ISO image files and guest storage images, the recommended location to use is /var/lib/libvirt/images/ . Any other location may require additional configuration by SELinux. See the Red Hat Enterprise Linux Virtualization Security Guide or the Red Hat Enterprise Linux SELinux User's and Administrator's Guide for more details on configuring SELinux. If you selected Network Install , input the URL of the installation source and also the required Kernel options, if any. The URL must point to the root directory of an installation tree, which must be accessible through either HTTP, FTP, or NFS. To perform a kickstart installation, specify the URL of a kickstart file in Kernel options, starting with ks= Figure 3.4. Network kickstart installation Note For a complete list of kernel options, see the Red Hat Enterprise Linux 7 Installation Guide . Next, configure the OS type and Version of the installation. Ensure that you select the appropriate operating system type for your virtual machine. This can be specified manually or by selecting the Automatically detect operating system based on install media check box. Click Forward to continue. Configure memory (RAM) and virtual CPUs Specify the number of CPUs and amount of memory (RAM) to allocate to the virtual machine. The wizard shows the number of CPUs and amount of memory you can allocate; these values affect the host's and guest's performance. Virtual machines require sufficient physical memory (RAM) to run efficiently and effectively. Red Hat supports a minimum of 512MB of RAM for a virtual machine. Red Hat recommends at least 1024MB of RAM for each logical core. Assign sufficient virtual CPUs for the virtual machine. If the virtual machine runs a multi-threaded application, assign the number of virtual CPUs the guest virtual machine will require to run efficiently. You cannot assign more virtual CPUs than there are physical processors (or hyper-threads) available on the host system. The number of virtual CPUs available is noted in the Up to X available field. Figure 3.5. 
Configuring Memory and CPU After you have configured the memory and CPU settings, click Forward to continue. Note Memory and virtual CPUs can be overcommitted. For more information on overcommitting, see Chapter 7, Overcommitting with KVM . Configure storage Enable and assign sufficient space for your virtual machine and any applications it requires. Assign at least 5 GB for a desktop installation or at least 1 GB for a minimal installation. Figure 3.6. Configuring virtual storage Note Live and offline migrations require virtual machines to be installed on shared network storage. For information on setting up shared storage for virtual machines, see Section 15.4, "Shared Storage Example: NFS for a Simple Migration" . With the default local storage Select the Create a disk image on the computer's hard drive radio button to create a file-based image in the default storage pool, the /var/lib/libvirt/images/ directory. Enter the size of the disk image to be created. If the Allocate entire disk now check box is selected, a disk image of the size specified will be created immediately. If not, the disk image will grow as it becomes filled. Note Although the storage pool is a virtual container it is limited by two factors: maximum size allowed to it by qemu-kvm and the size of the disk on the host physical machine. Storage pools may not exceed the size of the disk on the host physical machine. The maximum sizes are as follows: virtio-blk = 2^63 bytes or 8 Exabytes(using raw files or disk) Ext4 = ~ 16 TB (using 4 KB block size) XFS = ~8 Exabytes qcow2 and host file systems keep their own metadata and scalability should be evaluated/tuned when trying very large image sizes. Using raw disks means fewer layers that could affect scalability or max size. Click Forward to create a disk image on the local hard drive. Alternatively, select Select managed or other existing storage , then select Browse to configure managed storage. With a storage pool If you select Select managed or other existing storage to use a storage pool, click Browse to open the Locate or create storage volume window. Figure 3.7. The Choose Storage Volume window Select a storage pool from the Storage Pools list. Optional: Click to create a new storage volume. The Add a Storage Volume screen will appear. Enter the name of the new storage volume. Choose a format option from the Format drop-down menu. Format options include raw, qcow2, and qed. Adjust other fields as needed. Note that the qcow2 version used here is version 3. To change the qcow version see Section 23.19.2, "Setting Target Elements" Figure 3.8. The Add a Storage Volume window Select the new volume and click Choose volume . , click Finish to return to the New VM wizard. Click Forward to continue. Name and final configuration Name the virtual machine. Virtual machine names can contain letters, numbers and the following characters: underscores ( _ ), periods ( . ), and hyphens ( - ). Virtual machine names must be unique for migration and cannot consist only of numbers. By default, the virtual machine will be created with network address translation (NAT) for a network called 'default' . To change the network selection, click Network selection and select a host device and source mode. Verify the settings of the virtual machine and click Finish when you are satisfied; this will create the virtual machine with specified networking settings, virtualization type, and architecture. Figure 3.9. 
Verifying the configuration Or, to further configure the virtual machine's hardware, check the Customize configuration before install check box to change the guest's storage or network devices, to use the paravirtualized (virtio) drivers or to add additional devices. This opens another wizard that will allow you to add, remove, and configure the virtual machine's hardware settings. Note Red Hat Enterprise Linux 4 or Red Hat Enterprise Linux 5 guest virtual machines cannot be installed using graphical mode. As such, you must select "Cirrus" instead of "QXL" as a video card. After configuring the virtual machine's hardware, click Apply . virt-manager will then create the virtual machine with your specified hardware settings. Warning When installing a Red Hat Enterprise Linux 7 guest virtual machine from a remote medium but without a configured TCP/IP connection, the installation fails. However, when installing a guest virtual machine of Red Hat Enterprise Linux 5 or 6 in such circumstances, the installer opens a "Configure TCP/IP" interface. For further information about this difference, see the related knowledgebase article . Click Finish to continue into the Red Hat Enterprise Linux installation sequence. For more information on installing Red Hat Enterprise Linux 7, see the Red Hat Enterprise Linux 7 Installation Guide . A Red Hat Enterprise Linux 7 guest virtual machine is now created from an ISO installation disk image.
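The same guest can also be created without the graphical wizard by using the virt-install command, which is provided with the virtualization packages. The following command is a minimal sketch only: the guest name, ISO path, and --os-variant value are placeholders that must be adjusted to your environment, and the memory, vCPU, and disk sizes simply mirror the minimum recommendations from the procedure above.

# A minimal virt-install sketch of the wizard choices above, run as root.
# The guest name, ISO path, and --os-variant value are placeholders.
# --memory is in MiB; --disk creates a 5 GB qcow2 image in the default storage pool,
# and the guest is attached to the default NAT network.
virt-install \
    --name rhel7-guest \
    --memory 1024 \
    --vcpus 1 \
    --disk size=5,format=qcow2 \
    --cdrom /var/lib/libvirt/images/rhel-server-7.9-x86_64-dvd.iso \
    --os-variant rhel7.0 \
    --graphics spice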
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-creating_guests_with_virt_manager
Chapter 1. Creating and managing activation keys An activation key is a preshared authentication token that enables authorized users to register and auto-configure systems. Running a registration command with an activation key and organization ID combination, instead of a username and password combination, increases security and facilitates automation. Administrative users in your organization can create and manage activation keys on the Red Hat Hybrid Cloud Console. When an authorized user enters a preconfigured activation key to register a system on the command line, all of the system-level settings configured on the activation key are automatically applied to the system during the registration process. You can also use an activation key in a Kickstart file to bulk-provision the registration of multiple Red Hat Enterprise Linux (RHEL) instances without exposing personal username and password values. Your organization's activation keys and organization ID are displayed on the Activation Keys page in the Hybrid Cloud Console. Each user's access to your organization's activation keys is managed through a role-based access control (RBAC) system for the Hybrid Cloud Console. For example: Only users with the RHC user role can view the activation keys on the Activation Keys page. Only users with the RHC administrator role can create, configure, edit, and delete activation keys. Only users with root privileges or their equivalent can enter an activation key and organization ID with a registration command to connect and automatically configure systems from the command line. Users in the Organization Administrator group for your organization use the RBAC system to assign roles to other users in your organization. An Organization Administrator has the RHC administrator role by default. If you have questions about your access permissions, ask an Organization Administrator in your organization. 1.1. Viewing an activation key With the RHC user role, you can view your organization's numeric identifier (organization ID) and available activation keys on the Activation Keys page in the Hybrid Cloud Console. You can also view additional details, such as the Workload setting and additional enabled repositories for each activation key in your organization. The Key Name column shows the unique name of the activation key. The Role column shows the role value for the system purpose attribute set on the key. A potential role value is Red Hat Enterprise Linux Server . The SLA column shows the service level agreement value for the system purpose attribute set on the key. A potential service level agreement value is Premium . The Usage column shows the usage value for the system purpose attribute that is set on the key. A potential usage value is Production . If the RHC administrator sets no system purpose attributes on the activation key, then the Role, SLA, and Usage columns show no values. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC user or RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To view the activation keys for your organization in the Hybrid Cloud Console, complete the following step: From the Hybrid Cloud Console home page, click Services > System Configuration > Activation Keys . Note Activation keys are listed alphabetically by default. 1.2. 
Creating an activation key With the RHC administrator role, you can use the Hybrid Cloud Console interface to create activation keys that root users in your organization can use as an authentication token. During the creation process, you can configure the activation key to apply system-level features, such as system purpose attributes, to host systems. When an authorized user uses the preconfigured activation key to register a system to Red Hat, the selected attributes are automatically applied to the system during the registration process. The activation key creation wizard guides you through the following fields: Name The activation key name uniquely identifies it. You can use this name to locate the key in the table on the Activation Keys page or to specify the key in a CLI command or automation script. Workload The workload associates the appropriate selection of repositories with the activation key. You can edit these repositories on the activation key details page after the key is created. When creating an activation key, you can select either Latest release or Extended support for the workload. Latest release is selected by default. If your account has subscriptions that are eligible for extended update support (EUS), then you can select Extended support and then select an EUS product and version available to your account. If your account does not have any subscriptions that are eligible for EUS, then the Extended support option is disabled. System purpose The subscriptions service uses system purpose values to filter and identify hosts. You can set values for the Role , Service Level Agreement (SLA) , and Usage attributes to ensure that subscriptions are accurately reported for the system. You can select the system purpose values that are available to your account from the drop-down menus. Review You can review your entries and selections for each field before creating the activation key. If you do not select a value for an optional field, then the default value is Not defined . Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To create an activation key in the Hybrid Cloud Console, complete the following steps: From the Hybrid Cloud Console home page, click Services > System Configuration > Activation Keys . From the Activation Keys page, click Create activation key . In the Name field, enter a unique name for the activation key. Click Next . Note If you enter a name that already exists for an activation key in your organization, then you will receive an error message and the key will not be created. Select a workload option to associate the appropriate selection of repositories with the activation key. Click Next . Optional: In the Role , Service Level Agreement (SLA) , and Usage fields, select the system purpose attribute value that you want to set on the activation key. Click Next . Note Only the system purpose attributes that are available to your organization's account are selectable. Review the information that you entered into each field. If the information is correct, click Create . To make changes to the activation key settings or to enable additional repositories, click View activation key . The activation key details page opens. Verification The new activation key is listed on the Activation Keys page of the Hybrid Cloud Console. 1.3. 
Enabling additional repositories on an activation key By default, your system has access to content repositories that contain software packages that you can use to set up and manage your system. However, enabling additional repositories gives your system access to features and capabilities beyond the default repositories. It is no longer necessary to use command line tools to manually enable your system to access additional content repositories or to use automation scripts after the system is registered. With the RHC administrator role, you can use the Red Hat Hybrid Cloud Console interface to add selected repositories to an existing activation key. When a root user uses a preconfigured activation key to register a system from the command line, any content repositories that have been added to the activation key are automatically enabled for system access during the registration process. Using an activation key to automate the repository enablement process allows you to configure multiple system settings in one place for simplified system management. You can also use activation keys to apply system settings to multiple instances of Red Hat Enterprise Linux (RHEL) for automated bulk provisioning. Users with the RHC user role can see all the repositories that are associated with each activation key, but only users with the RHC administrator role can perform management functions, such as adding or deleting repositories on an activation key. If you have questions about your access permissions, contact a user in your organization with the Organization Administrator role in the Hybrid Cloud Console role-based access control (RBAC) system. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To enable additional repositories on an activation key in the Hybrid Cloud Console, complete the following steps: From the Hybrid Cloud Console home page, click Services > System Configuration > Activation Keys . From the Activation Keys table, click the name of the activation key that you want to configure with additional repositories. Note Only users with the RHC user role can click an activation key to view its details. If you have questions about your access permissions, contact a user in your organization with the Organization Administrator RBAC role. From the Additional repositories table, click Add repositories . Note Only users with the RHC administrator role can add or delete enabled repositories on an activation key. If you do not have sufficient access permissions to complete this step, Add repositories is disabled. If you have questions about your access permissions, contact a user in your organization with the Organization Administrator RBAC role. Select each additional repository that you want to enable with the activation key. Click Save Changes . Result If an RHC administrator has enabled additional repositories on an activation key, then those repositories are listed in the Additional repositories table for the selected activation key. 1.4. Editing an activation key With the RHC administrator role, you can use the Hybrid Cloud Console interface to edit the activation keys on the Activation Keys page. 
Specifically, you can add, change, or remove the following configurations on an existing activation key: System purpose attributes Workload, such as the release version of your system Additional enabled repositories Note You cannot edit the name of the activation key after it has been created. 1.4.1. Editing system purpose settings on an activation key You can change the system purpose configuration on an existing activation key by selecting a different system purpose attribute value from the Role , Service Level Agreement (SLA) , or Usage drop-down list. Possible selections for each attribute include the following values: Role Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node Not defined Service Level Agreement (SLA) Premium Standard Self-Support Not defined Usage Production Development/Test Not defined Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To edit the system purpose attributes on an activation key, complete the following steps: From the Hybrid Cloud Console home page, click Services > System Configuration > Activation Keys . From the Activation Keys table, click the name of the activation key that you want to edit. From the System Purpose section of the activation key details page, click Edit . Select the value from the Role, SLA, or Usage drop-down list that you want to set on the activation key. Click Save changes . 1.4.2. Editing workload settings on an activation key You can change the workload configuration on an existing activation key by selecting a different value from the Release version drop-down list. Possible selections for the workload include the following RHEL release versions: 8.1 8.2 8.4 8.6 8.8 9.0 9.2 Not defined Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To edit the workload setting on an activation key, complete the following steps: From the Hybrid Cloud Console home page, click Services > System Configuration > Activation Keys . From the Activation Keys table, click the name of the activation key that you want to edit. From the Workload section of the activation key details page, click Edit . Select the value from the Release version drop-down list that you want to set on the activation key. Click Save changes . 1.5. Deleting an activation key With the RHC administrator role, you can use the Hybrid Cloud Console interface to delete an activation key from the table on the Activation Keys page. You might want to delete an unwanted or compromised activation key for security or maintenance purposes. However, deleting an activation key that is referenced in an automation script will impact the ability of that automation to function. To avoid any negative impacts to your automated processes, complete one of the following actions: Remove the unwanted activation key from the automation script. Retire the automation script prior to deleting the key. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. 
Procedure To delete an activation key in the Hybrid Cloud Console, complete the following steps: From the Hybrid Cloud Console home page, click Services > System Configuration > Activation Keys . From the Activation Keys page, locate the row containing the activation key that you want to delete. Click the Delete icon. In the Delete Activation Key window, review the information about deleting activation keys. If you want to continue with the deletion, click Delete .
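For reference, the following commands show how a root user might consume an activation key from the command line of a RHEL system; this is a sketch only, and the activation key name and organization ID are placeholders for the values shown on the Activation Keys page. The register command applies the key's settings, and the later commands verify or adjust the enabled repositories and the release version discussed in this chapter.

# Register the system with an activation key and organization ID instead of a
# username and password; replace the placeholders with your own values.
subscription-manager register --activation-key=<activation_key_name> --org=<organization_ID>

# Verify which repositories the activation key enabled on this system.
subscription-manager repos --list-enabled

# Show or set the release version that corresponds to the key's workload setting
# (for example, lock the system to RHEL 8.6 content).
subscription-manager release --show
subscription-manager release --set=8.6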
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_activation_keys_on_the_hybrid_cloud_console/assembly-creating-managing-activation-keys
Chapter 6. Registration APIs 6.1. API Registration APIs 6.1.1. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 6.2. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 6.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. status object APIServiceStatus contains derived information about an API server 6.2.1.1. .spec Description APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. Type object Required groupPriorityMinimum versionPriority Property Type Description caBundle string CABundle is a PEM encoded CA bundle which will be used to validate an API server's serving certificate. If unspecified, system trust roots on the apiserver are used. group string Group is the API group name this server hosts groupPriorityMinimum integer GroupPriorityMininum is the priority this group should have at least. Higher priority means that the group is preferred by clients over lower priority ones. Note that other versions of this group might specify even higher GroupPriorityMininum values such that the whole group gets a higher priority. The primary sort is based on GroupPriorityMinimum, ordered highest number to lowest (20 before 10). The secondary sort is based on the alphabetical comparison of the name of the object. (v1.bar before v1.foo) We'd recommend something like: *.k8s.io (except extensions) at 18000 and PaaSes (OpenShift, Deis) are recommended to be in the 2000s insecureSkipTLSVerify boolean InsecureSkipTLSVerify disables TLS certificate verification when communicating with this server. This is strongly discouraged. You should use the CABundle instead. service object ServiceReference holds a reference to Service.legacy.k8s.io version string Version is the API version this server hosts. For example, "v1" versionPriority integer VersionPriority controls the ordering of this API version inside of its group. Must be greater than zero. The primary sort is based on VersionPriority, ordered highest to lowest (20 before 10). Since it's inside of a group, the number can be small, probably in the 10s. In case of equal version priorities, the version string will be used to compute the order inside a group. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. 
"Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. 6.2.1.2. .spec.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Property Type Description name string Name is the name of the service namespace string Namespace is the namespace of the service port integer If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 6.2.1.3. .status Description APIServiceStatus contains derived information about an API server Type object Property Type Description conditions array Current service state of apiService. conditions[] object APIServiceCondition describes the state of an APIService at a particular point 6.2.1.4. .status.conditions Description Current service state of apiService. Type array 6.2.1.5. .status.conditions[] Description APIServiceCondition describes the state of an APIService at a particular point Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, one-word, CamelCase reason for the condition's last transition. status string Status is the status of the condition. Can be True, False, Unknown. type string Type is the type of the condition. 6.2.2. API endpoints The following API endpoints are available: /apis/apiregistration.k8s.io/v1/apiservices DELETE : delete collection of APIService GET : list or watch objects of kind APIService POST : create an APIService /apis/apiregistration.k8s.io/v1/watch/apiservices GET : watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiregistration.k8s.io/v1/apiservices/{name} DELETE : delete an APIService GET : read the specified APIService PATCH : partially update the specified APIService PUT : replace the specified APIService /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} GET : watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status GET : read status of the specified APIService PATCH : partially update status of the specified APIService PUT : replace status of the specified APIService 6.2.2.1. /apis/apiregistration.k8s.io/v1/apiservices Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of APIService Table 6.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.3. Body parameters Parameter Type Description body DeleteOptions schema Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind APIService Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK APIServiceList schema 401 - Unauthorized Empty HTTP method POST Description create an APIService Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.8. Body parameters Parameter Type Description body APIService schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 202 - Accepted APIService schema 401 - Unauthorized Empty 6.2.2.2. /apis/apiregistration.k8s.io/v1/watch/apiservices Table 6.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.2.3. /apis/apiregistration.k8s.io/v1/apiservices/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the APIService Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an APIService Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. 
Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIService Table 6.17. HTTP responses HTTP code Reponse body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIService Table 6.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.19. Body parameters Parameter Type Description body Patch schema Table 6.20. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIService Table 6.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.22. Body parameters Parameter Type Description body APIService schema Table 6.23. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty 6.2.2.4. /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} Table 6.24. Global path parameters Parameter Type Description name string name of the APIService Table 6.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.2.5. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status Table 6.27. Global path parameters Parameter Type Description name string name of the APIService Table 6.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified APIService Table 6.29. HTTP responses HTTP code Response body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIService Table 6.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.31. Body parameters Parameter Type Description body Patch schema Table 6.32. HTTP responses HTTP code Response body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIService Table 6.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.34. Body parameters Parameter Type Description body APIService schema Table 6.35. HTTP responses HTTP code Response body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty
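The watch path above is deprecated in favor of a list request that combines the watch and fieldSelector parameters. A minimal sketch of both patterns using oc get --raw against the documented paths; the APIService name v1.apps is only an illustrative example:
# List all APIService objects through the documented path.
oc get --raw '/apis/apiregistration.k8s.io/v1/apiservices'
# Watch a single APIService by combining watch=true with a fieldSelector,
# as recommended instead of the deprecated /watch/ path; the stream runs until interrupted.
oc get --raw '/apis/apiregistration.k8s.io/v1/apiservices?watch=true&fieldSelector=metadata.name=v1.apps'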
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/registration-apis
Chapter 3. Installing Dev Spaces
Chapter 3. Installing Dev Spaces This section contains instructions to install Red Hat OpenShift Dev Spaces. You can deploy only one instance of OpenShift Dev Spaces per cluster. Section 3.1.2, "Installing Dev Spaces on OpenShift using CLI" Section 3.1.3, "Installing Dev Spaces on OpenShift using the web console" Section 3.1.4, "Installing Dev Spaces in a restricted environment" 3.1. Installing Dev Spaces in the cloud Deploy and run Red Hat OpenShift Dev Spaces in the cloud. Prerequisites A OpenShift cluster to deploy OpenShift Dev Spaces on. dsc : The command line tool for Red Hat OpenShift Dev Spaces. See: Section 2.2, "Installing the dsc management tool" . 3.1.1. Deploying OpenShift Dev Spaces in the cloud Follow the instructions below to start the OpenShift Dev Spaces Server in the cloud by using the dsc tool. Section 3.1.2, "Installing Dev Spaces on OpenShift using CLI" Section 3.1.3, "Installing Dev Spaces on OpenShift using the web console" Section 3.1.4, "Installing Dev Spaces in a restricted environment" https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.19/html-single/user_guide/index#installing-che-on-microsoft-azure Section 3.1.5, "Installing Dev Spaces on Amazon Elastic Kubernetes Service" 3.1.2. Installing Dev Spaces on OpenShift using CLI You can install OpenShift Dev Spaces on OpenShift. Prerequisites OpenShift Container Platform An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . dsc . See: Section 2.2, "Installing the dsc management tool" . Procedure Optional: If you previously deployed OpenShift Dev Spaces on this OpenShift cluster, ensure that the OpenShift Dev Spaces instance is removed: Create the OpenShift Dev Spaces instance: Verification steps Verify the OpenShift Dev Spaces instance status: Navigate to the OpenShift Dev Spaces cluster instance: Additional resources Section 3.3.1, "Permissions to install Dev Spaces on OpenShift using CLI" 3.1.3. Installing Dev Spaces on OpenShift using the web console If you have trouble installing OpenShift Dev Spaces on the command line , you can install it through the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . For a repeat installation on the same OpenShift cluster: you uninstalled the OpenShift Dev Spaces instance according to Chapter 9, Uninstalling Dev Spaces . Procedure In the Administrator view of the OpenShift web console, go to Operators OperatorHub and search for Red Hat OpenShift Dev Spaces . Install the Red Hat OpenShift Dev Spaces Operator. Tip See Installing from OperatorHub using the web console . Caution The Red Hat OpenShift Dev Spaces Operator depends on the Dev Workspace Operator. If you install the Red Hat OpenShift Dev Spaces Operator manually to a non-default namespace, ensure that the Dev Workspace Operator is also installed in the same namespace. This is required as the Operator Lifecycle Manager will attempt to install the Dev Workspace Operator as a dependency within the Red Hat OpenShift Dev Spaces Operator namespace, potentially resulting in two conflicting installations of the Dev Workspace Operator if the latter is installed in a different namespace. 
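As a quick way to confirm which namespace each of these Operators is installed in, you can list ClusterServiceVersions across all namespaces. This is a sketch using standard oc commands; the grep pattern is only illustrative:
# Show where the Dev Spaces and Dev Workspace Operators are installed.
oc get csv --all-namespaces | grep -Ei 'devworkspace|devspaces'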
Caution If you want to onboard Web Terminal Operator on the cluster make sure to use the same installation namespace as Red Hat OpenShift Dev Spaces Operator since both depend on Dev Workspace Operator. Web Terminal Operator, Red Hat OpenShift Dev Spaces Operator, and Dev Workspace Operator must be installed in the same namespace. Create the openshift-devspaces project in OpenShift as follows: Go to Operators Installed Operators Red Hat OpenShift Dev Spaces instance Specification Create CheCluster YAML view . In the YAML view , replace namespace: openshift-operators with namespace: openshift-devspaces . Select Create . Tip See Creating applications from installed Operators . Verification In Red Hat OpenShift Dev Spaces instance Specification , go to devspaces , landing on the Details tab. Under Message , check that there is None , which means no errors. Under Red Hat OpenShift Dev Spaces URL , wait until the URL of the OpenShift Dev Spaces instance appears, and then open the URL to check the OpenShift Dev Spaces dashboard. In the Resources tab, view the resources for the OpenShift Dev Spaces deployment and their status. Additional resources Section 3.3.2, "Permissions to install Dev Spaces on OpenShift using web console" 3.1.4. Installing Dev Spaces in a restricted environment On an OpenShift cluster operating in a restricted network, public resources are not available. However, deploying OpenShift Dev Spaces and running workspaces requires the following public resources: Operator catalog Container images Sample projects To make these resources available, you can replace them with their copy in a registry accessible by the OpenShift cluster. Prerequisites The OpenShift cluster has at least 64 GB of disk space. The OpenShift cluster is ready to operate on a restricted network. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . An active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication . opm . See Installing the opm CLI . jq . See Downloading jq . podman . See Podman Installation Instructions . skopeo version 1.6 or higher. See Installing Skopeo . An active skopeo session with administrative access to the private Docker registry. Authenticating to a registry , and Mirroring images for a disconnected installation . dsc for OpenShift Dev Spaces version 3.19. See Section 2.2, "Installing the dsc management tool" . Procedure Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh . 1 The private Docker registry where the images will be mirrored Install OpenShift Dev Spaces with the configuration set in the che-operator-cr-patch.yaml during the step: Allow incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user projects. See: Section 4.8.1, "Configuring network policies" . Additional resources Red Hat-provided Operator catalogs Managing custom catalogs 3.1.4.1. Setting up an Ansible sample Follow these steps to use an Ansible sample in restricted environments. Prerequisites Microsoft Visual Studio Code - Open Source IDE A 64-bit x86 system. 
Procedure Mirror the following images: Configure the cluster proxy to allow access to the following domains: Note Support for the following IDE and CPU architectures is planned for a future release: IDE JetBrains IntelliJ IDEA Community Edition IDE ( Technology Preview ) CPU architectures IBM Power (ppc64le) IBM Z (s390x) 3.1.5. Installing Dev Spaces on Amazon Elastic Kubernetes Service Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers. Follow the instructions below to install and enable OpenShift Dev Spaces on Amazon EKS. Prerequisites helm : The package manager for Kubernetes. See: Installing Helm . dsc . See: Section 2.2, "Installing the dsc management tool" . aws : The AWS Command Line Interface. See: AWS CLI install and update instructions eksctl : The Command Line Interface for creating and managing Kubernetes clusters on Amazon EKS. See: Installing eksctl 3.1.5.1. Configuring environment variables for Amazon EKS Follow these instructions to define environment variables and update your kubeconfig to connect to Amazon EKS. Prerequisites Amazon EKS cluster with storage addon. See: Create an Amazon EKS cluster Procedure Find the AWS account ID: Define the cluster name: Define the region: Update kubeconfig : Make sure that you have the default storage class set: The output should display a storage class with default next to its name: Additional resources Amazon Elastic Kubernetes Service Store Kubernetes volumes with Amazon EBS Create a managed node group for Amazon EKS Change the default storage class on Kubernetes cluster 3.1.5.2. Installing Ingress-Nginx Controller on Amazon EKS Follow these instructions to install the Ingress-Nginx Controller on Amazon EKS. Procedure Install the Ingress-Nginx Controller using Helm : Verify that you can access the load balancer externally. It may take a few minutes for the load balancer to be created: You should receive output similar to: <html> <head><title>404 Not Found</title></head> <body> <center><h1>404 Not Found</h1></center> <hr><center>nginx</center> </body> </html> Additional resources Ingress-Nginx Controller Installation Guide 3.1.5.3. Configuring DNS on Amazon EKS Follow these instructions to configure DNS on Amazon EKS. Prerequisites A registered domain. See: Registering a new domain on Amazon EKS . Procedure Define the registered domain name: Define the domain name for the Keycloak OIDC provider: Find out the hosted zone ID for the domain: Find out the Canonical Hosted Zone ID for the load balancer: Find out the DNS name for the load balancer: Create a DNS record set: Verify that you can access the OpenShift Dev Spaces domain externally: Create a DNS record set: Verify that you can access the Keycloak domain externally: 3.1.5.4. Installing cert-manager on Amazon EKS Follow these instructions to install cert-manager on Amazon EKS. Procedure Install cert-manager using Helm : Additional resources cert-manager Installation Guide 3.1.5.5. Creating Let's Encrypt certificate for OpenShift Dev Spaces on Amazon EKS Follow these instructions to create a Let's Encrypt certificate for OpenShift Dev Spaces on Amazon EKS. Procedure Create an IAM OIDC provider: Create a service principal: Create an IAM role and associate it with a Kubernetes Service Account: Grant permission for cert-manager to create Service Account tokens: Create the Issuer: 1 Replace <email_address> with your email address.
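The Issuer created in this step is shown in full in the command listing at the end of this chapter; an abridged sketch of that ClusterIssuer follows, with the <email_address> placeholder described in the callout above and the CHE_EKS_CLUSTER_REGION and AWS_ACCOUNT_ID environment variables exported earlier in this section:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: devspaces-letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <email_address>
    privateKeySecretRef:
      name: devspaces-letsencrypt-production
    solvers:
    - dns01:
        route53:
          region: $CHE_EKS_CLUSTER_REGION
          role: arn:aws:iam::${AWS_ACCOUNT_ID}:role/cert-manager-acme-dns01-route53
          auth:
            kubernetes:
              serviceAccountRef:
                name: cert-manager-acme-dns01-route53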
Create the openshift-devspaces namespace: Create the Certificate: Wait for the che-tls secret to be created: Additional resources cert-manager Installation Guide 3.1.5.6. Installing Keycloak on Amazon Elastic Kubernetes Service Follow these instructions to install Keycloak as the OpenID Connect (OIDC) provider. Procedure Install Keycloak: Important While this guide provides a development configuration for deploying Keycloak on Kubernetes, remember that production environments might require different settings, such as external database configuration. Wait until the Keycloak pod is ready: Wait for the keycloak.tls secret to be created: Configure Keycloak to create the realm, client, and user: Additional resources Configuring Keycloak for production 3.1.5.7. Associating Keycloak as the OIDC identity provider on Amazon EKS Follow these instructions to associate Keycloak as an OIDC identity provider on Amazon EKS. Procedure Associate Keycloak as an identity provider by using eksctl : eksctl associate identityprovider \ --wait \ --config-file - << EOF --- apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: USDCHE_EKS_CLUSTER_NAME region: USDCHE_EKS_CLUSTER_REGION identityProviders: - name: keycloak-oidc type: oidc issuerUrl: https://USDKEYCLOAK_DOMAIN_NAME/realms/che clientId: k8s-client usernameClaim: email EOF Additional resources Grant users access to Kubernetes with an external OIDC provider 3.1.5.8. Installing OpenShift Dev Spaces on Amazon EKS Follow these instructions to install OpenShift Dev Spaces on Amazon EKS. Procedure Prepare a CheCluster patch YAML file: cat > che-cluster-patch.yaml << EOF spec: networking: auth: oAuthClientName: k8s-client oAuthSecret: eclipse-che identityProviderURL: "https://USDKEYCLOAK_DOMAIN_NAME/realms/che" gateway: oAuthProxy: cookieExpireSeconds: 300 deployment: containers: - env: - name: OAUTH2_PROXY_BACKEND_LOGOUT_URL value: "http://USDKEYCLOAK_DOMAIN_NAME/realms/che/protocol/openid-connect/logout?id_token_hint={id_token}" name: oauth-proxy components: cheServer: extraProperties: CHE_OIDC_USERNAME__CLAIM: email EOF Deploy OpenShift Dev Spaces: Navigate to the OpenShift Dev Spaces cluster instance: 3.2. Finding the fully qualified domain name (FQDN) You can get the fully qualified domain name (FQDN) of your organization's instance of OpenShift Dev Spaces on the command line or in the OpenShift web console. Tip You can find the FQDN for your organization's OpenShift Dev Spaces instance in the Administrator view of the OpenShift web console as follows. Go to Operators Installed Operators Red Hat OpenShift Dev Spaces instance Specification devspaces Red Hat OpenShift Dev Spaces URL . Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . Procedure Run the following command: oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}' 3.3. Permissions to install Dev Spaces Learn about the permissions required to install Red Hat OpenShift Dev Spaces on different Kubernetes clusters. Section 3.3.1, "Permissions to install Dev Spaces on OpenShift using CLI" Section 3.3.2, "Permissions to install Dev Spaces on OpenShift using web console" 3.3.1.
Permissions to install Dev Spaces on OpenShift using CLI Below is the minimal set of permissions required to install OpenShift Dev Spaces on an OpenShift cluster using dsc: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: devspaces-install-dsc rules: - apiGroups: ["org.eclipse.che"] resources: ["checlusters"] verbs: ["*"] - apiGroups: ["project.openshift.io"] resources: ["projects"] verbs: ["get", "list"] - apiGroups: [""] resources: ["namespaces"] verbs: ["get", "list", "create"] - apiGroups: [""] resources: ["pods", "configmaps"] verbs: ["get", "list"] - apiGroups: ["route.openshift.io"] resources: ["routes"] verbs: ["get", "list"] # OLM resources permissions - apiGroups: ["operators.coreos.com"] resources: ["catalogsources", "subscriptions"] verbs: ["create", "get", "list", "watch"] - apiGroups: ["operators.coreos.com"] resources: ["operatorgroups", "clusterserviceversions"] verbs: ["get", "list", "watch"] - apiGroups: ["operators.coreos.com"] resources: ["installplans"] verbs: ["patch", "get", "list", "watch"] - apiGroups: ["packages.operators.coreos.com"] resources: ["packagemanifests"] verbs: ["get", "list"] Additional resources oc apply command oc adm policy command 3.3.2. Permissions to install Dev Spaces on OpenShift using web console Below is the minimal set of permissions required to install OpenShift Dev Spaces on an OpenShift cluster using the web console: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: devspaces-install-web-console rules: - apiGroups: ["org.eclipse.che"] resources: ["checlusters"] verbs: ["*"] - apiGroups: [""] resources: ["namespaces"] verbs: ["get", "list", "create"] - apiGroups: ["project.openshift.io"] resources: ["projects"] verbs: ["get", "list", "create"] # OLM resources permissions - apiGroups: ["operators.coreos.com"] resources: ["subscriptions"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: ["operators.coreos.com"] resources: ["operatorgroups"] verbs: ["get", "list", "watch"] - apiGroups: ["operators.coreos.com"] resources: ["clusterserviceversions", "catalogsources", "installplans"] verbs: ["get", "list", "watch", "delete"] - apiGroups: ["packages.operators.coreos.com"] resources: ["packagemanifests", "packagemanifests/icon"] verbs: ["get", "list", "watch"] # Workaround related to viewing operators in OperatorHub - apiGroups: ["operator.openshift.io"] resources: ["cloudcredentials"] verbs: ["get", "list", "watch"] - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "authentications"] verbs: ["get", "list", "watch"] Additional resources oc apply command oc adm policy command
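A minimal sketch of how a cluster administrator might apply one of these ClusterRoles and grant it to the installing user, using the oc apply and oc adm policy commands listed under Additional resources; the file name and username are illustrative:
# Save the ClusterRole definition to a file, then create it.
oc apply -f devspaces-install-dsc.yaml
# Bind the ClusterRole to the user who will run dsc.
oc adm policy add-cluster-role-to-user devspaces-install-dsc <username>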
[ "dsc server:delete", "dsc server:deploy --platform openshift", "dsc server:status", "dsc dashboard:open", "create namespace openshift-devspaces", "bash prepare-restricted-environment.sh --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.17 --devworkspace_operator_version \"v0.32.0\" --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.17\" --prod_operator_package_name \"devspaces\" --prod_operator_bundle_name \"devspacesoperator\" --prod_operator_version \"v3.19.0\" --my_registry \" <my_registry> \" 1", "dsc server:deploy --platform=openshift --olm-channel stable --catalog-source-name=devspaces-disconnected-install --catalog-source-namespace=openshift-marketplace --skip-devworkspace-operator --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml", "ghcr.io/ansible/ansible-devspaces@sha256:a28fa23d254ff1b3ae10b95a0812132148f141bda4516661e40d0c49c4ace200 registry.access.redhat.com/ubi8/python-39@sha256:301fec66443f80c3cc507ccaf72319052db5a1dc56deb55c8f169011d4bbaacb", ".ansible.com .ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com", "AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text)", "CHE_EKS_CLUSTER_NAME=che", "CHE_EKS_CLUSTER_REGION=eu-central-1", "aws eks update-kubeconfig --region USDCHE_EKS_CLUSTER_REGION --name USDCHE_EKS_CLUSTER_NAME", "get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 126m", "helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm install ingress-nginx ingress-nginx/ingress-nginx --wait --create-namespace --namespace ingress-nginx --set controller.service.annotations.\"service\\.beta\\.kubernetes\\.io/aws-load-balancer-backend-protocol\"=tcp --set controller.service.annotations.\"service\\.beta\\.kubernetes\\.io/aws-load-balancer-cross-zone-load-balancing-enabled\"=\"true\" --set controller.service.annotations.\"service\\.beta\\.kubernetes\\.io/aws-load-balancer-type\"=nlb", "until curl USD(oc get service -n ingress-nginx ingress-nginx-controller -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'); do sleep 5s; done", "<html> <head><title>404 Not Found</title></head> <body> <center><h1>404 Not Found</h1></center> <hr><center>nginx</center> </body> </html>", "CHE_DOMAIN_NAME=eclipse-che-eks-clould.click", "KEYCLOAK_DOMAIN_NAME=keycloak.USDCHE_DOMAIN_NAME", "HOSTED_ZONE_ID=USD(aws route53 list-hosted-zones-by-name --dns-name USDCHE_DOMAIN_NAME --query \"HostedZones[0].Id\" --output text)", "CANONICAL_HOSTED_ZONE_ID=USD(aws elbv2 describe-load-balancers --query \"LoadBalancers[0].CanonicalHostedZoneId\" --output text)", "DNS_NAME=USD(oc get service -n ingress-nginx ingress-nginx-controller -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "aws route53 change-resource-record-sets --hosted-zone-id USDHOSTED_ZONE_ID --change-batch ' { \"Comment\": \"Ceating a record set\", \"Changes\": [{ \"Action\" : \"CREATE\", \"ResourceRecordSet\" : { \"Name\" : \"'\"USDCHE_DOMAIN_NAME\"'\", \"Type\" : \"A\", \"AliasTarget\" : { \"HostedZoneId\" : \"'\"USDCANONICAL_HOSTED_ZONE_ID\"'\", \"DNSName\" : \"'\"USDDNS_NAME\"'\", \"EvaluateTargetHealth\" : false } } }] } '", "until curl USDCHE_DOMAIN_NAME; do sleep 5s; done", "aws route53 change-resource-record-sets --hosted-zone-id USDHOSTED_ZONE_ID --change-batch ' { \"Comment\": \"Ceating a record set\", \"Changes\": [{ \"Action\" : \"CREATE\", \"ResourceRecordSet\" : { \"Name\" : 
\"'\"USDKEYCLOAK_DOMAIN_NAME\"'\", \"Type\" : \"A\", \"AliasTarget\" : { \"HostedZoneId\" : \"'\"USDCANONICAL_HOSTED_ZONE_ID\"'\", \"DNSName\" : \"'\"USDDNS_NAME\"'\", \"EvaluateTargetHealth\" : false } } }] } '", "until curl USDKEYCLOAK_DOMAIN_NAME; do sleep 5s; done", "helm repo add jetstack https://charts.jetstack.io helm repo update helm install cert-manager jetstack/cert-manager --wait --create-namespace --namespace cert-manager --set crds.enabled=true", "eksctl utils associate-iam-oidc-provider --cluster USDCHE_EKS_CLUSTER_NAME --approve", "aws iam create-policy --policy-name cert-manager-acme-dns01-route53 --description \"This policy allows cert-manager to manage ACME DNS01 records in Route53 hosted zones. See https://cert-manager.io/docs/configuration/acme/dns01/route53\" --policy-document file:///dev/stdin <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": \"route53:GetChange\", \"Resource\": \"arn:aws:route53:::change/*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"route53:ChangeResourceRecordSets\", \"route53:ListResourceRecordSets\" ], \"Resource\": \"arn:aws:route53:::hostedzone/*\" }, { \"Effect\": \"Allow\", \"Action\": \"route53:ListHostedZonesByName\", \"Resource\": \"*\" } ] } EOF", "eksctl create iamserviceaccount --name cert-manager-acme-dns01-route53 --namespace cert-manager --cluster USDCHE_EKS_CLUSTER_NAME --role-name cert-manager-acme-dns01-route53 --attach-policy-arn arn:aws:iam::USDAWS_ACCOUNT_ID:policy/cert-manager-acme-dns01-route53 --approve", "apply -f - << EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: cert-manager-acme-dns01-route53-tokenrequest namespace: cert-manager rules: - apiGroups: [''] resources: ['serviceaccounts/token'] resourceNames: ['cert-manager-acme-dns01-route53'] verbs: ['create'] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cert-manager-acme-dns01-route53-tokenrequest namespace: cert-manager subjects: - kind: ServiceAccount name: cert-manager namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: cert-manager-acme-dns01-route53-tokenrequest EOF", "apply -f - << EOF apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: devspaces-letsencrypt spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: <email_address> 1 privateKeySecretRef: name: devspaces-letsencrypt-production solvers: - dns01: route53: region: USDCHE_EKS_CLUSTER_REGION role: arn:aws:iam::USD{AWS_ACCOUNT_ID}:role/cert-manager-acme-dns01-route53 auth: kubernetes: serviceAccountRef: name: cert-manager-acme-dns01-route53 EOF", "create namespace openshift-devspaces", "apply -f - << EOF apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: che-tls namespace: openshift-devspaces spec: secretName: che-tls issuerRef: name: devspaces-letsencrypt kind: ClusterIssuer commonName: 'USDCHE_DOMAIN_NAME' dnsNames: - 'USDCHE_DOMAIN_NAME' - '*.USDCHE_DOMAIN_NAME' usages: - server auth - digital signature - key encipherment - key agreement - data encipherment EOF", "until oc get secret -n openshift-devspaces che-tls; do sleep 5s; done", "While this guide provides a development configuration for deploying Keycloak on {kubernetes}, remember that production environments might require different settings, such as external database configuration.", "apply -f - <<EOF --- apiVersion: v1 kind: Namespace metadata: name: keycloak --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: keycloak namespace: keycloak labels: app: 
keycloak spec: secretName: keycloak.tls issuerRef: name: che-letsencrypt kind: ClusterIssuer commonName: 'USDKEYCLOAK_DOMAIN_NAME' dnsNames: - 'USDKEYCLOAK_DOMAIN_NAME' usages: - server auth - digital signature - key encipherment - key agreement - data encipherment --- apiVersion: v1 kind: Service metadata: name: keycloak namespace: keycloak labels: app: keycloak spec: ports: - name: http port: 8080 targetPort: 8080 selector: app: keycloak type: LoadBalancer --- apiVersion: apps/v1 kind: Deployment metadata: name: keycloak namespace: keycloak labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:18.0.2 args: [\"start-dev\"] env: - name: KEYCLOAK_ADMIN value: \"admin\" - name: KEYCLOAK_ADMIN_PASSWORD value: \"admin\" - name: KC_PROXY value: \"edge\" ports: - name: http containerPort: 8080 readinessProbe: httpGet: path: /realms/master port: 8080 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: keycloak namespace: keycloak annotations: nginx.ingress.kubernetes.io/proxy-connect-timeout: '3600' nginx.ingress.kubernetes.io/proxy-read-timeout: '3600' nginx.ingress.kubernetes.io/ssl-redirect: 'true' spec: ingressClassName: nginx tls: - hosts: - USDKEYCLOAK_DOMAIN_NAME secretName: keycloak.tls rules: - host: USDKEYCLOAK_DOMAIN_NAME http: paths: - path: / pathType: Prefix backend: service: name: keycloak port: number: 8080 EOF", "wait --for=condition=ready pod -l app=keycloak -n keycloak --timeout=120s", "until oc get secret -n keycloak keycloak.tls; do sleep 5s; done", "exec deploy/keycloak -n keycloak -- bash -c \"/opt/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin --password admin && /opt/keycloak/bin/kcadm.sh create realms -s realm='che' -s displayName='che' -s enabled=true -s registrationAllowed=false -s resetPasswordAllowed=true && /opt/keycloak/bin/kcadm.sh create clients -r 'che' -s clientId=k8s-client -s id=k8s-client -s redirectUris='[\\\"*\\\"]' -s directAccessGrantsEnabled=true -s secret=eclipse-che && /opt/keycloak/bin/kcadm.sh create users -r 'che' -s username=test -s email=\\\"[email protected]\\\" -s enabled=true -s emailVerified=true && /opt/keycloak/bin/kcadm.sh set-password -r 'che' --username test --new-password test\"", "eksctl associate identityprovider --wait --config-file - << EOF --- apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: USDCHE_EKS_CLUSTER_NAME region: USDCHE_EKS_CLUSTER_REGION identityProviders: - name: keycloak-oidc type: oidc issuerUrl: https://USDKEYCLOAK_DOMAIN_NAME/realms/che clientId: k8s-client usernameClaim: email EOF", "cat > che-cluster-patch.yaml << EOF spec: networking: auth: oAuthClientName: k8s-client oAuthSecret: eclipse-che identityProviderURL: \"https://USDKEYCLOAK_DOMAIN_NAME/realms/che\" gateway: oAuthProxy: cookieExpireSeconds: 300 deployment: containers: - env: - name: OAUTH2_PROXY_BACKEND_LOGOUT_URL value: \"http://USDKEYCLOAK_DOMAIN_NAME/realms/che/protocol/openid-connect/logout?id_token_hint={id_token}\" name: oauth-proxy components: cheServer: extraProperties: CHE_OIDC_USERNAME__CLAIM: email EOF", "dsc server:deploy --platform k8s --domain USDCHE_DOMAIN_NAME --che-operator-cr-patch-yaml che-cluster-patch.yaml --skip-cert-manager --k8spodreadytimeout 240000 --k8spoddownloadimagetimeout 240000", "dsc dashboard:open", "get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'", "apiVersion: 
rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: devspaces-install-dsc rules: - apiGroups: [\"org.eclipse.che\"] resources: [\"checlusters\"] verbs: [\"*\"] - apiGroups: [\"project.openshift.io\"] resources: [\"projects\"] verbs: [\"get\", \"list\"] - apiGroups: [\"\"] resources: [\"namespaces\"] verbs: [\"get\", \"list\", \"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"configmaps\"] verbs: [\"get\", \"list\"] - apiGroups: [\"route.openshift.io\"] resources: [\"routes\"] verbs: [\"get\", \"list\"] # OLM resources permissions - apiGroups: [\"operators.coreos.com\"] resources: [\"catalogsources\", \"subscriptions\"] verbs: [\"create\", \"get\", \"list\", \"watch\"] - apiGroups: [\"operators.coreos.com\"] resources: [\"operatorgroups\", \"clusterserviceversions\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"operators.coreos.com\"] resources: [\"installplans\"] verbs: [\"patch\", \"get\", \"list\", \"watch\"] - apiGroups: [\"packages.operators.coreos.com\"] resources: [\"packagemanifests\"] verbs: [\"get\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: devspaces-install-web-console rules: - apiGroups: [\"org.eclipse.che\"] resources: [\"checlusters\"] verbs: [\"*\"] - apiGroups: [\"\"] resources: [\"namespaces\"] verbs: [\"get\", \"list\", \"create\"] - apiGroups: [\"project.openshift.io\"] resources: [\"projects\"] verbs: [\"get\", \"list\", \"create\"] # OLM resources permissions - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"operators.coreos.com\"] resources: [\"operatorgroups\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"operators.coreos.com\"] resources: [\"clusterserviceversions\", \"catalogsources\", \"installplans\"] verbs: [\"get\", \"list\", \"watch\", \"delete\"] - apiGroups: [\"packages.operators.coreos.com\"] resources: [\"packagemanifests\", \"packagemanifests/icon\"] verbs: [\"get\", \"list\", \"watch\"] # Workaround related to viewing operators in OperatorHub - apiGroups: [\"operator.openshift.io\"] resources: [\"cloudcredentials\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"authentications\"] verbs: [\"get\", \"list\", \"watch\"]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/administration_guide/installing-devspaces
Chapter 11. Installing a cluster into a shared VPC on GCP using Deployment Manager templates
Chapter 11. Installing a cluster into a shared VPC on GCP using Deployment Manager templates In OpenShift Container Platform version 4.17, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP) that uses infrastructure that you provide. In this context, a cluster installed into a shared VPC is a cluster that is configured to use a VPC from a project different from where the cluster is being deployed. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IPs from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation. The steps for performing a user-provided infrastructure installation into a shared VPC are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . Note Be sure to also review this site list if you are configuring a proxy. 11.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 11.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.4. Configuring the GCP project that hosts your cluster Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 11.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 11.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 11.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 11.2. Optional API services API service Console service name Cloud Deployment Manager V2 API deploymentmanager.googleapis.com Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 11.4.3. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 11.3. 
GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 11.4.4. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 11.4.4.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. 
If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin Tag User Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The following roles are applied to the service accounts that the control plane and compute machines use: Table 11.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin roles/artifactregistry.reader 11.4.5. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: africa-south1 (Johannesburg, South Africa) asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-central2 (Dammam, Saudi Arabia, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 11.4.6. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 11.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 
This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 11.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 11.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 11.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 11.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 11.5.3. 
Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 11.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D N4 Tau T2D 11.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 11.6. Configuring the GCP project that hosts your shared VPC network If you use a shared Virtual Private Cloud (VPC) to host your OpenShift Container Platform cluster in Google Cloud Platform (GCP), you must configure the project that hosts it. Note If you already have a project that hosts the shared VPC network, review this section to ensure that the project meets all of the requirements to install an OpenShift Container Platform cluster. Procedure Create a project to host the shared VPC for your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Create a service account in the project that hosts your shared VPC. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. The service account for the project that hosts the shared VPC network requires the following roles: Compute Network User Compute Security Admin Deployment Manager Editor DNS Administrator Security Admin Network Management Admin 11.6.1. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the project that hosts the shared VPC that you install the cluster into. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. 
For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 11.6.2. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Export the following variables required by the resource definition: Export the control plane CIDR: USD export MASTER_SUBNET_CIDR='10.0.0.0/17' Export the compute CIDR: USD export WORKER_SUBNET_CIDR='10.0.128.0/17' Export the region to deploy the VPC network and cluster to: USD export REGION='<region>' Export the variable for the ID of the project that hosts the shared VPC: USD export HOST_PROJECT=<host_project> Export the variable for the email of the service account that belongs to host project: USD export HOST_PROJECT_ACCOUNT=<host_service_account_email> Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the prefix of the network name. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1 1 For <vpc_deployment_name> , specify the name of the VPC to deploy. Export the VPC variable that other components require: Export the name of the host project network: USD export HOST_PROJECT_NETWORK=<vpc_network> Export the name of the host project control plane subnet: USD export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet> Export the name of the host project compute subnet: USD export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet> Set up the shared VPC. See Setting up Shared VPC in the GCP documentation. 11.6.2.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 11.2. 
01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 11.7. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 11.7.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
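Because the installation program recognizes only the exact file name install-config.yaml, it can be worth confirming that the file is named and placed correctly before you continue. The following minimal Python sketch is illustrative only and is not part of the official procedure; pass your installation directory as the first argument:
# check_install_config.py: a minimal sketch that verifies the installation directory
# contains a file named exactly "install-config.yaml".
from pathlib import Path
import sys

installation_directory = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
config_file = installation_directory / "install-config.yaml"

if config_file.is_file():
    print(f"Found {config_file} ({config_file.stat().st_size} bytes)")
else:
    # List any YAML files that are present to help spot a misnamed configuration file.
    candidates = sorted(p.name for p in installation_directory.glob("*.y*ml"))
    sys.exit(f"install-config.yaml not found in {installation_directory}; YAML files present: {candidates}")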
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 11.7.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 11.7.3. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 11.7.4. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{"auths": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15 1 Specify the public DNS on the host project. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 8 10 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter applies to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 Specify the main project where the VM instances reside. 12 Specify the region that your VPC network is in. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 14 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 15 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . To use a shared VPC in a cluster that uses infrastructure that you provision, you must set publish to Internal . The installation program will no longer be able to access the public DNS zone for the base domain in the host project. 11.7.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.7.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. 
Remove the Kubernetes manifest files that define the control plane machine set: $ rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false. Save and exit the file. Remove the privateZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {} 1 Remove this section completely. Configure the cloud provider for your VPC. Open the <installation_directory>/manifests/cloud-provider-config.yaml file. Add the network-project-id parameter and set its value to the ID of the project that hosts the shared VPC network. Add the network-name parameter and set its value to the name of the shared VPC network that hosts the OpenShift Container Platform cluster. Replace the value of the subnetwork-name parameter with the value of the shared VPC subnet that hosts your compute machines. The contents of the <installation_directory>/manifests/cloud-provider-config.yaml resemble the following example: config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet If you deploy a cluster that is not on a private network, open the <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml file and replace the value of the scope parameter with External. The contents of the file resemble the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: '' To create the Ignition configuration files, run the following command from the directory that contains the installation program: $ ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory>, specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory.
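The generated Ignition config files are plain JSON, so you can optionally inspect them before you provision any machines. The following minimal sketch is illustrative only, not part of the official procedure; replace <installation_directory> with the directory you used above:
# inspect_ignition.py: a minimal sketch that summarizes the generated Ignition config files.
import json
from pathlib import Path

installation_directory = Path("<installation_directory>")  # replace with your installation directory

for name in ("bootstrap.ign", "master.ign", "worker.ign"):
    config = json.loads((installation_directory / name).read_text())
    version = config.get("ignition", {}).get("version", "unknown")
    files = config.get("storage", {}).get("files", [])
    print(f"{name}: Ignition spec {version}, {len(files)} file entries")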
11.8. Exporting common variables 11.8.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: $ jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 11.8.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates to assist in completing a user-provisioned infrastructure installation on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Procedure Export the following common variables to be used by the provided Deployment Manager templates: $ export BASE_DOMAIN='<base_domain>' 1 $ export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 $ export NETWORK_CIDR='10.0.0.0/16' $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 $ export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` $ export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` $ export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` 1 2 Supply the values for the host project. 3 For <installation_directory>, specify the path to the directory that you stored the installation files in.
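If jq is not available on your workstation, the same values can be read with a few lines of Python, because metadata.json is plain JSON. This sketch is an illustrative alternative to the jq-based commands above, not a replacement for them:
# read_metadata.py: a minimal sketch that prints the values from metadata.json that the
# Deployment Manager templates rely on (cluster name, infrastructure ID, and project ID).
import json
from pathlib import Path

metadata = json.loads(Path("<installation_directory>/metadata.json").read_text())

print("CLUSTER_NAME =", metadata["clusterName"])
print("INFRA_ID     =", metadata["infraID"])
print("PROJECT_NAME =", metadata["gcp"]["projectID"])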
11.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 11.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 11.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Table 11.7. Ports used for all-machine to all-machine communications
Protocol  Port          Description
ICMP      N/A           Network reachability tests
TCP       1936          Metrics
TCP       9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099
TCP       10250-10259   The default ports that Kubernetes reserves
UDP       4789          VXLAN
UDP       6081          Geneve
UDP       9000-9999     Host level services, including the node exporter on ports 9100-9101
UDP       500           IPsec IKE packets
UDP       4500          IPsec NAT-T packets
UDP       123           Network Time Protocol (NTP) on UDP port 123. If an external NTP time server is configured, you must open UDP port 123.
TCP/UDP   30000-32767   Kubernetes node port
ESP       N/A           IPsec Encapsulating Security Payload (ESP)
Table 11.8. Ports used for all-machine to control plane communications
Protocol  Port          Description
TCP       6443          Kubernetes API
Table 11.9. Ports used for control plane machine to control plane machine communications
Protocol  Port          Description
TCP       2379-2380     etcd server and peer ports
11.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires.
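Because the Deployment Manager templates are ordinary Python modules, you can optionally render one locally to review the resources it defines before you create a deployment. The following minimal sketch is illustrative only: it assumes 02_lb_int.py is saved in the current directory and uses placeholder property values in place of the variables that you export in the next step:
# preview_template.py: a minimal sketch that renders a Deployment Manager Python template
# locally so that you can inspect the resources it would create.
import importlib.util
import json

class MockContext:
    """Stands in for the context object that Deployment Manager passes to GenerateConfig."""
    def __init__(self, properties):
        self.properties = properties

def render(template_path, properties):
    spec = importlib.util.spec_from_file_location("template", template_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.GenerateConfig(MockContext(properties))

if __name__ == "__main__":
    # Placeholder values; substitute the values that you export later in this procedure.
    config = render("02_lb_int.py", {
        "infra_id": "mycluster-abcde",
        "region": "us-central1",
        "cluster_network": "<cluster_network_selfLink>",
        "control_subnet": "<control_subnet_selfLink>",
        "zones": ["us-central1-a", "us-central1-b", "us-central1-c"],
    })
    print(json.dumps(config, indent=2))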
Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 11.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 11.3. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' 
+ context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 11.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 11.4. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 11.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. 
Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 11.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 11.5. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 11.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. 
Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the firewall rules that your cluster requires. Create a 03_firewall.yaml resource definition file: $ cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: '${INFRA_ID}' 2 cluster_network: '${CLUSTER_NETWORK}' 3 network_cidr: '${NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to ${NETWORK_CIDR}. 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16. Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} 11.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 11.6. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': {
'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 11.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Assign the permissions that the installation program requires to the service accounts for the subnets that host the control plane and compute subnets: Grant the networkViewer role of the project that hosts your shared VPC to the master service account: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkViewer" Grant the networkUser role to the master service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the master service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 11.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 11.7. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 11.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 11.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Ensure you installed pyOpenSSL. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 11.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 11.8. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 11.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , Creating IAM roles in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 11.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 11.9. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 11.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 11.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. 
Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 1 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 11.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 11.10. 
06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 11.19. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 11.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You installed the oc CLI. Ensure the bootstrap process completed successfully. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
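The following loop is one possible sketch of such an approval method, not an officially supported implementation. It assumes that the oc CLI is logged in with cluster-admin privileges and that automatically approving every CSR submitted by the node-bootstrapper service account or by a system:node user is acceptable in your environment; review the security implications before running anything similar in production:
while true; do
  # List pending CSRs together with the requesting user, then approve the ones
  # that came from the bootstrapper service account or from a node identity.
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | awk '$2 == "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper" || $2 ~ /^system:node:/ {print $1}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done
This sketch only checks the requesting user; a production-grade approver would also confirm the identity of the node, as described in the preceding note.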
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 11.22. Adding the ingress DNS records DNS zone configuration is removed when creating Kubernetes manifests and generating Ignition configs. You must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Ensure you defined the variables in the Exporting common variables section. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Ensure the bootstrap process completed successfully. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 11.23. Adding ingress firewall rules The cluster requires several firewall rules. If you do not use a shared VPC, these rules are created by the Ingress Controller via the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events, when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters. If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. Events that are similar to the following event are displayed, and you must add the firewall rules that are required: USD oc get events -n openshift-ingress --field-selector="reason=LoadBalancerManualChange" Example output Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project` If you encounter issues when creating these rule-based events, you can configure the cluster-wide firewall rules while your cluster is running. 11.23.1. Creating cluster-wide firewall rules for a shared VPC in GCP You can create cluster-wide firewall rules to allow the access that the OpenShift Container Platform cluster requires. Warning If you do not choose to create firewall rules based on cluster events, you must create cluster-wide firewall rules. Prerequisites You exported the variables that the Deployment Manager templates require to deploy your cluster. You created the networking and load balancing components in GCP that your cluster requires. 
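If you are creating these firewall rules from a new shell session, re-export the variables that the following commands reference. The exports below repeat commands used earlier in this topic and assume that the <installation_directory> path and the HOST_PROJECT, HOST_PROJECT_ACCOUNT, and HOST_PROJECT_NETWORK variables from the shared VPC configuration are still available:
export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json`
export CLUSTER_NETWORK=(`gcloud compute networks describe ${HOST_PROJECT_NETWORK} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)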
Procedure Add a single firewall rule to allow the Google Cloud Engine health checks to access all of the services. This rule enables the ingress load balancers to determine the health status of their instances. USD gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="USD{CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} Add a single firewall rule to allow access to all cluster services: For an external cluster: USD gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="USD{CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} For a private cluster: USD gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="USD{CLUSTER_NETWORK}" --source-ranges=USD{NETWORK_CIDR} --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} Because this rule only allows traffic on TCP ports 80 and 443 , ensure that you add all the ports that your services use. 11.24. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Ensure the bootstrap process completed successfully. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. 
Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 11.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 11.26. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting.
[ "export MASTER_SUBNET_CIDR='10.0.0.0/17'", "export WORKER_SUBNET_CIDR='10.0.128.0/17'", "export REGION='<region>'", "export HOST_PROJECT=<host_project>", "export HOST_PROJECT_ACCOUNT=<host_service_account_email>", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1", "export HOST_PROJECT_NETWORK=<vpc_network>", "export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>", "export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "mkdir <installation_directory>", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{\"auths\": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {}", "config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: ''", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' 1 export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 export NETWORK_CIDR='10.0.0.0/16' export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': 
['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkViewer\"", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create 
service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 
'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': 
context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "oc get events -n openshift-ingress --field-selector=\"reason=LoadBalancerManualChange\"", "Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description \"{\\\"kubernetes.io/service-name\\\":\\\"openshift-ingress/router-default\\\", \\\"kubernetes.io/service-ip\\\":\\\"35.237.236.234\\\"}\\\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`", "gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=\"0.0.0.0/0\" --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=USD{NETWORK_CIDR} --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 
4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m" ]
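The certificate signing request (CSR) approval and node verification commands above are normally run by hand in two rounds: once for the node-bootstrapper CSRs and again for the kubelet serving CSRs that appear after the first approval. The following is a minimal bash sketch of how those steps can be combined into a single wait loop; it is not part of the documented procedure, and the EXPECTED_WORKERS value and the 30-second polling interval are illustrative assumptions.

#!/usr/bin/env bash
# Hypothetical helper: approve pending CSRs until the expected worker nodes are Ready.
# EXPECTED_WORKERS and the polling interval are assumptions for illustration only.
EXPECTED_WORKERS=2

while true; do
  # Approve every CSR that has no status yet; this covers both the
  # node-bootstrapper CSRs and the kubelet serving CSRs that follow them.
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve

  # Count worker nodes that report a Ready status (STATUS is the second column).
  ready=$(oc get nodes -l node-role.kubernetes.io/worker --no-headers 2>/dev/null \
    | awk '$2 == "Ready"' | wc -l)

  echo "Ready workers: ${ready}/${EXPECTED_WORKERS}"
  if [ "${ready}" -ge "${EXPECTED_WORKERS}" ]; then
    break
  fi
  sleep 30
done

Because the loop only approves CSRs that have no status, rerunning it is safe; requests that are already approved are skipped.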
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installing-gcp-user-infra-vpc
Chapter 2. Node [v1]
Chapter 2. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NodeSpec describes the attributes that a node is created with. status object NodeStatus is information about the current status of a node. 2.1.1. .spec Description NodeSpec describes the attributes that a node is created with. Type object Property Type Description configSource object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 externalID string Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https://issues.k8s.io/61966 podCIDR string PodCIDR represents the pod IP range assigned to the node. podCIDRs array (string) podCIDRs represents the IP ranges assigned to the node for usage by Pods on that node. If this field is specified, the 0th entry must match the podCIDR field. It may contain at most 1 value for each of IPv4 and IPv6. providerID string ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID> taints array If specified, the node's taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. unschedulable boolean Unschedulable controls node schedulability of new pods. By default, node is schedulable. More info: https://kubernetes.io/docs/concepts/nodes/node/#manual-node-administration 2.1.2. .spec.configSource Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.3. .spec.configSource.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. 
resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.4. .spec.taints Description If specified, the node's taints. Type array 2.1.5. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required key effect Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Required. The taint key to be applied to a node. timeAdded Time TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 2.1.6. .status Description NodeStatus is information about the current status of a node. Type object Property Type Description addresses array List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See http://pr.k8s.io/79391 for an example. addresses[] object NodeAddress contains information for the node's address. allocatable object (Quantity) Allocatable represents the resources of a node that are available for scheduling. Defaults to Capacity. capacity object (Quantity) Capacity represents the total resources of a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity conditions array Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition conditions[] object NodeCondition contains condition information for a node. config object NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. daemonEndpoints object NodeDaemonEndpoints lists ports opened by daemons running on the Node. images array List of container images on this node images[] object Describe a container image nodeInfo object NodeSystemInfo is a set of ids/uuids to uniquely identify the node. phase string NodePhase is the recently observed lifecycle phase of the node. More info: https://kubernetes.io/docs/concepts/nodes/node/#phase The field is never populated, and now is deprecated. Possible enum values: - "Pending" means the node has been created/added by the system, but not configured. - "Running" means the node has been configured and has Kubernetes components running. - "Terminated" means the node has been removed from the cluster. 
volumesAttached array List of volumes that are attached to the node. volumesAttached[] object AttachedVolume describes a volume attached to a node volumesInUse array (string) List of attachable volumes in use (mounted) by the node. 2.1.7. .status.addresses Description List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See http://pr.k8s.io/79391 for an example. Type array 2.1.8. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required type address Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 2.1.9. .status.conditions Description Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition Type array 2.1.10. .status.conditions[] Description NodeCondition contains condition information for a node. Type object Required type status Property Type Description lastHeartbeatTime Time Last time we got an update on a given condition. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of node condition. 2.1.11. .status.config Description NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. Type object Property Type Description active object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 assigned object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 error string Error describes any problems reconciling the Spec.ConfigSource to the Active config. Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions. lastKnownGood object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 2.1.12. .status.config.active Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. 
This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.13. .status.config.active.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.14. .status.config.assigned Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.15. .status.config.assigned.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.16. .status.config.lastKnownGood Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.17. .status.config.lastKnownGood.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. 
This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.18. .status.daemonEndpoints Description NodeDaemonEndpoints lists ports opened by daemons running on the Node. Type object Property Type Description kubeletEndpoint object DaemonEndpoint contains information about a single Daemon endpoint. 2.1.19. .status.daemonEndpoints.kubeletEndpoint Description DaemonEndpoint contains information about a single Daemon endpoint. Type object Required Port Property Type Description Port integer Port number of the given endpoint. 2.1.20. .status.images Description List of container images on this node Type array 2.1.21. .status.images[] Description Describe a container image Type object Property Type Description names array (string) Names by which this image is known. e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"] sizeBytes integer The size of the image in bytes. 2.1.22. .status.nodeInfo Description NodeSystemInfo is a set of ids/uuids to uniquely identify the node. Type object Required machineID systemUUID bootID kernelVersion osImage containerRuntimeVersion kubeletVersion kubeProxyVersion operatingSystem architecture Property Type Description architecture string The Architecture reported by the node bootID string Boot ID reported by the node. containerRuntimeVersion string ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd://1.4.2). kernelVersion string Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64). kubeProxyVersion string KubeProxy Version reported by the node. kubeletVersion string Kubelet Version reported by the node. machineID string MachineID reported by the node. For unique machine identification in the cluster this field is preferred. Learn more from man(5) machine-id: http://man7.org/linux/man-pages/man5/machine-id.5.html operatingSystem string The Operating System reported by the node osImage string OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)). systemUUID string SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid 2.1.23. .status.volumesAttached Description List of volumes that are attached to the node. Type array 2.1.24. .status.volumesAttached[] Description AttachedVolume describes a volume attached to a node Type object Required name devicePath Property Type Description devicePath string DevicePath represents the device path where the volume should be available name string Name of the attached volume 2.2. 
API endpoints The following API endpoints are available: /api/v1/nodes DELETE : delete collection of Node GET : list or watch objects of kind Node POST : create a Node /api/v1/watch/nodes GET : watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /api/v1/watch/nodes/{name} GET : watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 2.2.1. /api/v1/nodes Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Node Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Node Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 2.6. HTTP responses HTTP code Reponse body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body Node schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 2.2.2. /api/v1/watch/nodes Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /api/v1/nodes/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the Node Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete a Node Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 2.17. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body Node schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 2.2.4. /api/v1/watch/nodes/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the Node Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /api/v1/nodes/{name}/status Table 2.27. Global path parameters Parameter Type Description name string name of the Node Table 2.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Node Table 2.29. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 2.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.31. Body parameters Parameter Type Description body Patch schema Table 2.32. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 2.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.34. Body parameters Parameter Type Description body Node schema Table 2.35. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty
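As a brief usage illustration of the /api/v1/nodes/{name} and /api/v1/nodes/{name}/status endpoints described above, the following sketch uses oc to toggle spec.unschedulable and to set a taint with a PATCH request, then reads the object back. The node name worker-0 and the taint key example.com/maintenance are placeholder values and are not part of the API reference.

# Placeholder node name; substitute a real node from 'oc get nodes'.
NODE=worker-0

# PATCH /api/v1/nodes/{name}: mark the node unschedulable (same effect as 'oc adm cordon').
oc patch node "${NODE}" --type=merge -p '{"spec":{"unschedulable":true}}'

# PATCH /api/v1/nodes/{name}: set spec.taints. Note that a JSON merge patch
# replaces the whole taints list rather than appending to it.
oc patch node "${NODE}" --type=merge -p \
  '{"spec":{"taints":[{"key":"example.com/maintenance","value":"true","effect":"NoSchedule"}]}}'

# GET /api/v1/nodes/{name}/status through the raw API path.
oc get --raw "/api/v1/nodes/${NODE}/status" | head -c 400

# GET /api/v1/nodes/{name}: confirm the taint and the Ready condition.
oc get node "${NODE}" -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'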
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/node_apis/node-v1
Chapter 1. Components Overview
Chapter 1. Components Overview This chapter provides a summary of all the components available for Apache Camel. 1.1. Container types Red Hat Fuse provides a variety of container types, into which you can deploy your Camel applications: Spring Boot Apache Karaf JBoss Enterprise Application Platform (JBoss EAP) In addition, a Camel application can run as containerless : that is, where a Camel application runs directly in the JVM, without any special container. In some cases, Fuse might support a Camel component in one container, but not in the others. There are various reasons for this, but in some cases a component is not suitable for all container types. For example, the camel-ejb component is designed specifically for Java EE (that is, JBoss EAP), and cannot be supported in the other container types. Note The camel-test component and extended components such as camel-test-blueprint , camel-test-karaf , and camel-test-spring are supported and can be used to run JUnit tests for each runtime. However, these components are not executed on the runtimes themselves but inside JUnit. 1.2. Supported components Note the following key: Symbol Description ✔ Supported ❌ Unsupported or not yet supported Deprecated Likely to be removed in a future release Table 1.1, "Apache Camel Component Support Matrix" provides comprehensive details about which Camel components are supported in which containers. Table 1.1. Apache Camel Component Support Matrix Component Containerless Spring Boot 2.x Karaf JBoss EAP IBM Power / Spring Boot 2 IBM Z / Spring Boot 2 activemq-camel ✔ ✔ ✔ ✔ ✔ ✔ activemq-http ✔ ❌ ❌ ✔ ❌ ❌ camel-ahc ✔ ✔ ✔ ✔ ❌ ❌ camel-ahc-ws ✔ ✔ ✔ ✔ ❌ ❌ camel-ahc-wss ✔ ✔ ✔ ✔ ❌ ❌ camel-amqp ✔ ✔ ✔ ✔ ✔ ✔ camel-apns ✔ ✔ ✔ ✔ ❌ ❌ camel-as2 ❌ ✔ ✔ ❌ ❌ ❌ camel-asterisk ✔ ✔ ✔ ✔ ❌ ❌ camel-atmos ✔ ✔ ❌ ❌ ❌ ❌ camel-atmosphere-websocket ✔ ✔ ✔ ✔ ❌ ❌ camel-atom ✔ ✔ ✔ ✔ ❌ ❌ camel-atomix ✔ ✔ ✔ ✔ ❌ ❌ camel-avro ✔ ✔ ✔ ✔ ✔ ✔ camel-aws ✔ ✔ ✔ ✔ ❌ ❌ camel-azure ✔ ✔ ✔ ✔ ❌ ❌ camel-bam Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-bean ✔ ✔ ✔ ✔ ❌ ❌ camel-bean-validator ✔ ✔ ✔ ✔ ❌ ❌ camel-beanstalk ✔ ✔ ✔ ❌ ❌ ❌ camel-binding Deprecated Deprecated Deprecated ✔ ❌ ❌ camel-blueprint ✔ ❌ ✔ ❌ ❌ ❌ camel-bonita ✔ ❌ ❌ ❌ ❌ ❌ camel-box ✔ ✔ ✔ ✔ ❌ ❌ camel-braintree ✔ ✔ ✔ ✔ ❌ ❌ camel-browse ✔ ✔ ✔ ✔ ❌ ❌ camel-cache Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-caffeine ✔ ✔ ✔ ✔ ❌ ❌ camel-cdi ✔ ❌ Deprecated ✔ ❌ ❌ camel-chronicle-engine ✔ ✔ ✔ ✔ ❌ ❌ camel-chunk ✔ ✔ ✔ ✔ ❌ ❌ camel-class ✔ ✔ ✔ ✔ ❌ ❌ camel-cm-sms ✔ ✔ ✔ ✔ ❌ ❌ camel-cmis ✔ ✔ ✔ ✔ ❌ ❌ camel-coap ✔ ✔ ✔ ✔ ❌ ❌ camel-cometd ✔ ✔ ✔ ✔ ❌ ❌ camel-context Deprecated ❌ ❌ ❌ ❌ ❌ camel-consul ✔ ✔ ✔ ✔ ❌ ❌ camel-controlbus ✔ ✔ ✔ ✔ ❌ ❌ camel-couchbase ✔ ✔ ✔ ✔ ❌ ❌ camel-couchdb ✔ ✔ ✔ ✔ ❌ ❌ camel-cql ✔ ✔ ✔ ✔ ❌ ❌ camel-crypto ✔ ✔ ✔ ✔ ❌ ❌ camel-crypto-cms ✔ ✔ ✔ ✔ ❌ ❌ camel-cxf ✔ ✔ ✔ ✔ ❌ ❌ camel-cxf-transport ✔ ✔ ✔ ✔ ❌ ❌ camel-dataformat ✔ ✔ ✔ ✔ ✔ ✔ camel-dataset ✔ ✔ ✔ ✔ ❌ ❌ camel-digitalocean ✔ ✔ ✔ ✔ ❌ ❌ camel-direct ✔ ✔ ✔ ✔ ❌ ❌ camel-direct-vm ✔ ✔ ✔ ✔ ❌ ❌ camel-disruptor ✔ ✔ ✔ ✔ ❌ ❌ camel-dns ✔ ✔ ✔ ✔ ❌ ❌ camel-docker ✔ ✔ ✔ ✔ ❌ ❌ camel-dozer ✔ ✔ ✔ ✔ ❌ ❌ camel-drill ✔ ✔ ✔ ❌ ❌ ❌ camel-dropbox ✔ ✔ ✔ ✔ ❌ ❌ camel-eclipse Deprecated ❌ ❌ ❌ ❌ ❌ camel-ehcache ✔ ✔ ✔ ✔ ❌ ❌ camel-ejb ✔ ❌ ❌ ✔ ❌ ❌ camel-elasticsearch ✔ ✔ ✔ ✔ ❌ ❌ camel-elasticsearch5 ✔ ✔ ✔ ✔ ❌ ❌ camel-elasticsearch-rest ✔ ✔ ✔ ✔ ❌ ❌ camel-elsql ✔ ✔ ✔ ✔ ❌ ❌ camel-etcd ✔ ✔ ✔ ✔ ❌ ❌ camel-eventadmin ✔ ❌ ✔ ❌ ❌ ❌ camel-exec ✔ ✔ ✔ ✔ ✔ ✔ camel-facebook ✔ ✔ ✔ ✔ ❌ ❌ camel-fhir ❌ ✔ ❌ ❌ Deprecated Deprecated camel-file ✔ ✔ ✔ ✔ ✔ ✔ camel-flatpack ✔ ✔ ✔ ✔ ❌ ❌ camel-flink ✔ ✔ ❌ ✔ ❌ ❌ 
camel-fop ✔ ✔ ✔ ✔ ❌ ❌ camel-freemarker ✔ ✔ ✔ ✔ ❌ ❌ camel-ftp ✔ ✔ ✔ ✔ ✔ ✔ camel-gae Deprecated ❌ ❌ ❌ ❌ ❌ camel-ganglia ✔ ✔ ✔ ✔ ❌ ❌ camel-geocoder ✔ ✔ ✔ ✔ ❌ ❌ camel-git ✔ ✔ ✔ ✔ ❌ ❌ camel-github ✔ ✔ ✔ ✔ ❌ ❌ camel-google-bigquery ✔ ✔ ❌ ✔ ❌ ❌ camel-google-calendar ✔ ✔ ✔ ✔ ❌ ❌ camel-google-drive ✔ ✔ ✔ ✔ ❌ ❌ camel-google-mail ✔ ✔ ✔ ✔ ❌ ❌ camel-google-pubsub ✔ ✔ ✔ ✔ ❌ ❌ camel-google-sheets ❌ ✔ ❌ ❌ ❌ ❌ camel-grape ✔ ✔ ✔ ❌ ❌ ❌ camel-groovy-dsl Deprecated ❌ ❌ ❌ ❌ ❌ camel-grpc ✔ ✔ ✔ ✔ ❌ ❌ camel-guava-eventbus ✔ ✔ ✔ ✔ ❌ ❌ camel-guice Deprecated Deprecated ❌ ❌ ❌ ❌ camel-hawtdb Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-hazelcast ✔ ✔ ✔ ✔ ❌ ❌ camel-hbase ✔ ✔ ❌ ❌ ❌ ❌ camel-hdfs Deprecated ❌ ❌ ❌ ❌ ❌ camel-hdfs2 ✔ ✔ ✔ ✔ ❌ ❌ camel-headersmap ✔ ✔ ✔ ✔ ❌ ❌ camel-hipchat ✔ ✔ ✔ ✔ ❌ ❌ camel-http Deprecated Deprecated ✔ ❌ ❌ ❌ camel-http4 ✔ ✔ ✔ ✔ ✔ ✔ camel-hystrix ✔ ✔ ✔ ✔ ❌ ❌ camel-ibatis Deprecated ❌ ❌ ❌ ❌ ❌ camel-iec60870 ✔ ✔ ✔ ✔ ❌ ❌ camel-ignite ❌ ❌ ❌ ❌ ❌ ❌ camel-imap ✔ ✔ ✔ ✔ ❌ ❌ camel-infinispan ✔ ✔ ✔ ✔ ✔ ✔ camel-influxdb ✔ ✔ ✔ ✔ ❌ ❌ camel-ipfs ❌ ✔ ❌ ✔ ❌ ❌ camel-irc ✔ ✔ ✔ ✔ ❌ ❌ camel-ironmq ❌ ❌ ❌ ❌ ❌ ❌ camel-jasypt ✔ ✔ ✔ ✔ ❌ ❌ camel-javaspace Deprecated ❌ ❌ ❌ ❌ ❌ camel-jbpm ❌ ❌ ❌ ❌ ❌ ❌ camel-jcache ✔ ✔ ✔ ✔ ❌ ❌ camel-jcifs ✔ ❌ ✔ ❌ ❌ ❌ camel-jclouds ✔ ❌ ✔ ✔ ❌ ❌ camel-jcr ✔ ✔ ✔ ✔ ❌ ❌ camel-jdbc ✔ ✔ ✔ ✔ ✔ ✔ camel-jetty Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-jetty8 ❌ ❌ ❌ ❌ ❌ ❌ camel-jetty9 ✔ ✔ ✔ ❌ ✔ ✔ camel-jgroups ✔ ✔ ✔ ✔ ❌ ❌ camel-jing ✔ ✔ ✔ ✔ ❌ ❌ camel-jira ✔ ✔ ❌ ❌ ❌ ❌ camel-jms ✔ ✔ ✔ ✔ ✔ ✔ camel-jmx ✔ ✔ ✔ ✔ ❌ ❌ camel-jolt ✔ ✔ ✔ ✔ ❌ ❌ camel-josql Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-jpa ✔ ✔ ✔ ✔ ✔ ✔ camel-jsch ✔ ✔ ✔ ✔ ❌ ❌ camel-json-validator ✔ ✔ ✔ ❌ ✔ ✔ camel-jt400 ✔ ✔ ✔ ✔ ❌ ❌ camel-juel Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-kafka ✔ ✔ ✔ ✔ ✔ ✔ camel-kestrel Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-krati Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-kubernetes ✔ ✔ ✔ ✔ ❌ ❌ camel-kura ✔ ❌ ✔ ❌ ❌ ❌ camel-ldap ✔ ✔ ✔ ✔ ❌ ❌ camel-ldif ✔ ✔ ✔ ❌ ❌ ❌ camel-leveldb ✔ ✔ ✔ ✔ ❌ ❌ camel-linkedin ❌ ❌ ❌ ❌ ❌ ❌ camel-log ✔ ✔ ✔ ✔ ❌ ❌ camel-lpr ✔ ✔ ✔ ✔ ❌ ❌ camel-lra ❌ ❌ ❌ ❌ ❌ ❌ camel-lucene ✔ ✔ ✔ ✔ ❌ ❌ camel-lumberjack ✔ ✔ ✔ ✔ ❌ ❌ camel-master ✔ ✔ ✔ ❌ ❌ ❌ camel-mail ✔ ✔ ✔ ✔ ❌ ❌ camel-metrics ✔ ✔ ✔ ✔ ❌ ❌ camel-micrometer ❌ ✔ ❌ ❌ ❌ ❌ camel-milo ✔ ✔ ✔ ✔ ❌ ❌ camel-mina Deprecated ❌ ✔ ❌ ❌ ❌ camel-mina2 ✔ ✔ ✔ ✔ ❌ ❌ camel-mllp ✔ ✔ ✔ ✔ ❌ ❌ camel-mock ✔ ✔ ✔ ✔ ❌ ❌ camel-mongodb ✔ ✔ ✔ ✔ ❌ ❌ camel-mongodb-gridfs ✔ ✔ ✔ ✔ ❌ ❌ camel-mongodb3 ✔ ✔ ✔ ✔ ❌ ❌ camel-mqtt Deprecated Deprecated Deprecated Deprecated ❌ ❌ camel-msv ✔ ✔ ✔ ✔ ❌ ❌ camel-mustache ✔ ✔ ✔ ✔ ❌ ❌ camel-mvel ✔ ✔ ✔ ✔ ❌ ❌ camel-mybatis ✔ ✔ ✔ ✔ ❌ ❌ camel-nagios ✔ ✔ ✔ ❌ ❌ ❌ camel-nats ✔ ✔ ✔ ✔ ❌ ❌ camel-netty Deprecated Deprecated ✔ ❌ ❌ ❌ camel-netty-http Deprecated Deprecated ✔ ❌ ❌ ❌ camel-netty4 ✔ ✔ ✔ ✔ ❌ ❌ camel-netty4-http ✔ ✔ ✔ ❌ ❌ ❌ camel-nsq ❌ ✔ ❌ ❌ ❌ ❌ camel-olingo2 ✔ ✔ ✔ ✔ ❌ ❌ camel-olingo4 ✔ ✔ ✔ ✔ ❌ ❌ camel-openapi-java ✔ ✔ ✔ ✔ ✔ ✔ camel-openshift Deprecated ❌ ✔ ❌ ❌ ❌ camel-openstack ✔ ✔ ✔ ✔ ❌ ❌ camel-opentracing ✔ ✔ ✔ ✔ ❌ ❌ camel-optaplanner ✔ ✔ ✔ ✔ ❌ ❌ camel-paho ✔ ✔ ✔ ✔ ❌ ❌ camel-paxlogging ✔ ❌ ✔ ❌ ❌ ❌ camel-pdf ✔ ✔ ✔ ✔ ❌ ❌ camel-pgevent ✔ ✔ ✔ ✔ ❌ ❌ camel-pop3 ✔ ✔ ✔ ✔ ❌ ❌ camel-printer ✔ ✔ ✔ ✔ ❌ ❌ camel-properties ✔ ✔ ✔ ✔ ❌ ❌ camel-pubnub ✔ ✔ ✔ ✔ ❌ ❌ camel-pulsar ✔ ✔ ✔ ❌ ❌ ❌ camel-quartz Deprecated ❌ ✔ ❌ ❌ ❌ camel-quartz2 ✔ ✔ ✔ ✔ ❌ ❌ camel-quickfix ✔ ✔ ✔ ✔ ✔ ✔ camel-rabbitmq ✔ ✔ ✔ ✔ ❌ ❌ camel-reactive-streams ✔ ✔ ✔ ✔ ❌ ❌ camel-reactor ✔ ✔ ✔ ✔ ❌ ❌ camel-ref ✔ ✔ ✔ ✔ ❌ ❌ camel-rest ✔ ✔ ✔ ✔ ✔ ✔ camel-rest-api ✔ ✔ ✔ ✔ ✔ ✔ camel-rest-openapi ✔ ✔ ✔ ✔ ✔ ✔ 
camel-rest-swagger ✔ ✔ ✔ ✔ ❌ ❌ camel-restlet ✔ ✔ ✔ ❌ ❌ ❌ camel-ribbon ✔ ✔ ❌ ✔ ❌ ❌ camel-rmi ✔ ✔ ✔ ✔ ❌ ❌ camel-routebox Deprecated ❌ ✔ ❌ ❌ ❌ camel-rss ✔ ✔ ✔ ✔ ❌ ❌ camel-rx Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-rxjava2 ❌ ✔ ❌ ❌ ❌ ❌ camel-saga ❌ ❌ ❌ ❌ ❌ ❌ camel-salesforce ✔ ✔ ✔ ✔ ❌ ❌ camel-sap ✔ ✔ ✔ ✔ ✔ ✔ camel-sap-netweaver ✔ ✔ ✔ ✔ ✔ ✔ camel-saxon ✔ ✔ ✔ ✔ ❌ ❌ camel-scala Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-scheduler ✔ ✔ ✔ ✔ ❌ ❌ camel-schematron ✔ ✔ ✔ ✔ ❌ ❌ camel-scp ✔ ✔ ✔ ✔ ❌ ❌ camel-scr Deprecated ✔ Deprecated ❌ ❌ ❌ camel-script Deprecated Deprecated Deprecated Deprecated ❌ ❌ camel-seda ✔ ✔ ✔ ✔ ❌ ❌ camel-service ❌ ✔ ❌ ❌ ❌ ❌ camel-servicenow ✔ ✔ ✔ ✔ ❌ ❌ camel-servlet ✔ ✔ ✔ ✔ ❌ ❌ camel-servletlistener Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-sftp ✔ ✔ ✔ ✔ ✔ ✔ camel-shiro ✔ ✔ ✔ ✔ ❌ ❌ camel-sip ✔ ✔ ✔ ✔ ❌ ❌ camel-sjms ✔ ✔ ✔ ✔ ❌ ❌ camel-sjms2 ✔ ✔ ✔ ✔ ❌ ❌ camel-slack ✔ ✔ ✔ ✔ ❌ ❌ camel-smpp ✔ ✔ ✔ ✔ ❌ ❌ camel-snakeyaml ✔ ✔ ✔ ✔ ❌ ❌ camel-snmp ✔ ✔ ✔ ✔ ❌ ❌ camel-solr ✔ ✔ ✔ ✔ ❌ ❌ camel-spark ✔ ✔ ❌ ❌ ❌ ❌ camel-spark-rest ✔ ✔ ❌ ❌ ❌ ❌ camel-splunk ✔ ✔ ✔ ✔ ❌ ❌ camel-spring ✔ ✔ ✔ ✔ ✔ ✔ camel-spring-batch ✔ ✔ ✔ ✔ ✔ ✔ camel-spring-boot ✔ ✔ ❌ ❌ ✔ ✔ camel-spring-cloud ✔ ✔ ❌ ❌ ✔ ✔ camel-spring-cloud-consul ❌ ✔ ❌ ❌ ❌ ❌ camel-spring-cloud-netflix ✔ ✔ ❌ ❌ ❌ ❌ camel-spring-cloud-zookeeper ❌ ✔ ❌ ❌ ❌ ❌ camel-spring-event ✔ ✔ ❌ ✔ ✔ ✔ camel-spring-integration ✔ ❌ ❌ ✔ ❌ ❌ camel-spring-javaconfig ✔ ✔ ❌ ✔ ❌ ❌ camel-spring-ldap ✔ ✔ ✔ ✔ ❌ ❌ camel-spring-redis ✔ ✔ ❌ ✔ ❌ ❌ camel-spring-security ✔ ✔ ✔ ✔ ✔ ✔ camel-spring-ws ✔ ✔ ✔ ❌ ❌ ❌ camel-sql ✔ ✔ ✔ ✔ ✔ ✔ camel-sql-stored ✔ ✔ ✔ ✔ ✔ ✔ camel-ssh ✔ ✔ ✔ ✔ ✔ ✔ camel-stax ✔ ✔ ✔ ✔ ❌ ❌ camel-stomp ✔ ✔ ✔ ✔ ❌ ❌ camel-stream ✔ ✔ ✔ ✔ ❌ ❌ camel-string-template ✔ ✔ ✔ ✔ ❌ ❌ camel-stub ✔ ✔ ✔ ✔ ❌ ❌ camel-swagger Deprecated ❌ Deprecated ❌ ❌ ❌ camel-swagger-java ✔ ✔ ✔ ✔ ❌ ❌ camel-tagsoup ✔ ✔ ✔ ✔ ❌ ❌ camel-telegram ✔ ✔ ✔ ✔ ❌ ❌ camel-test ✔ ✔ ✔ ✔ ❌ ❌ camel-thrift ✔ ✔ ✔ ✔ ❌ ❌ camel-tika ✔ ✔ ✔ ✔ ❌ ❌ camel-timer ✔ ✔ ✔ ✔ ❌ ❌ camel-twilio ✔ ✔ ✔ ✔ ❌ ❌ camel-twitter ✔ ✔ ✔ ✔ ❌ ❌ camel-undertow ✔ ✔ ✔ ✔ ❌ ❌ camel-urlrewrite Deprecated Deprecated Deprecated ❌ ❌ ❌ camel-validator ✔ ✔ ✔ ✔ ❌ ❌ camel-velocity ✔ ✔ ✔ ✔ ❌ ❌ camel-vertx ✔ ✔ ✔ ✔ ❌ ❌ camel-vm ✔ ✔ ✔ ✔ ❌ ❌ camel-weather ✔ ✔ ✔ ✔ ❌ ❌ camel-web3j ❌ ✔ ❌ ❌ ❌ ❌ camel-websocket ✔ ✔ ✔ ❌ ❌ ❌ camel-weka ✔ ❌ ❌ ✔ ❌ ❌ camel-wordpress ✔ ✔ ✔ ✔ ❌ ❌ camel-xchange ✔ ✔ ✔ ✔ ❌ ❌ camel-xmlrpc ❌ ❌ ❌ ❌ ❌ ❌ camel-xmlsecurity ✔ ✔ ✔ ✔ ❌ ❌ camel-xmpp ✔ ✔ ✔ ✔ ❌ ❌ camel-xquery ✔ ✔ ✔ ✔ ❌ ❌ camel-xslt ✔ ✔ ✔ ✔ ❌ ❌ camel-yammer ✔ ✔ ✔ ✔ ❌ ❌ camel-yql ❌ ❌ ❌ ❌ ❌ ❌ camel-zendesk ✔ ✔ ❌ ✔ ❌ ❌ camel-zipkin ✔ ✔ ✔ ✔ ❌ ❌ camel-zookeeper ✔ ✔ ✔ ✔ ❌ ❌ camel-zookeeper-master ✔ ✔ ✔ ✔ ❌ ❌ Table 1.2. 
Apache Camel Data Format Support Matrix Component Containerless Spring Boot 2.x Karaf JBoss EAP camel-asn1 ✔ ✔ ✔ ✔ camel-avro ✔ ✔ ✔ ✔ camel-barcode ✔ ✔ ✔ ✔ camel-base64 ✔ ✔ ✔ ✔ camel-beanio ✔ ✔ ✔ ✔ camel-bindy ✔ ✔ ✔ ✔ camel-boon ✔ ✔ ✔ ✔ camel-castor Deprecated Deprecated Deprecated ✔ camel-crypto ✔ ✔ ✔ ✔ camel-csv ✔ ✔ ✔ ✔ camel-fhir ✔ ✔ ✔ ✔ camel-flatpack ✔ ✔ ✔ ✔ camel-gzip ✔ ✔ ✔ ✔ camel-hessian Deprecated Deprecated Deprecated Deprecated camel-hl7 ✔ ✔ ✔ ✔ camel-ical ✔ ✔ ✔ ✔ camel-jacksonxml ✔ ✔ ✔ ✔ camel-jaxb ✔ ✔ ✔ ✔ camel-jibx ✔ ✔ ✔ ✔ camel-json-fastjson ✔ ✔ ✔ ✔ camel-json-gson ✔ ✔ ✔ ✔ camel-json-jackson ✔ ✔ ✔ ✔ camel-json-johnzon ✔ ✔ ✔ ✔ camel-json-xstream ✔ ✔ ✔ ✔ camel-lzf ✔ ✔ ✔ ✔ camel-mime-multipart ✔ ✔ ✔ ✔ camel-pgp ✔ ✔ ✔ ✔ camel-protobuf ✔ ✔ ✔ ✔ camel-rss ✔ ✔ ✔ ✔ camel-serialization ✔ ✔ ✔ ✔ camel-soapjaxb ✔ ✔ ✔ ✔ camel-string ✔ ✔ ✔ ✔ camel-syslog ✔ ✔ ✔ ✔ camel-tarfile ✔ ✔ ✔ ✔ camel-thrift ✔ ✔ ✔ ✔ camel-univocity-csv ✔ ✔ ✔ ✔ camel-univocity-fixed ✔ ✔ ✔ ✔ camel-univocity-tsv ✔ ✔ ✔ ✔ camel-xmlbeans Deprecated Deprecated Deprecated ✔ camel-xmljson Deprecated Deprecated Deprecated Deprecated camel-xmlrpc ❌ ❌ ❌ ❌ camel-xstream ✔ ✔ ✔ ✔ camel-yaml-snakeyaml ✔ ✔ ✔ ✔ camel-zip ✔ ✔ ✔ ✔ camel-zipfile ✔ ✔ ✔ ✔ Table 1.3. Apache Camel Language Support Matrix Language Containerless Spring Boot 2.x Karaf JBoss EAP Bean method ✔ ✔ ✔ ✔ Constant ✔ ✔ ✔ ✔ EL Deprecated ❌ ❌ ❌ ExchangeProperty ✔ ✔ ✔ ✔ File ✔ ✔ ✔ ✔ Groovy ✔ ✔ ✔ ✔ Header ✔ ✔ ✔ ✔ JsonPath ✔ ✔ ✔ ✔ JXPath Deprecated ❌ ❌ ❌ MVEL ✔ ✔ ✔ ✔ OGNL ✔ ✔ ✔ ✔ PHP Deprecated Deprecated ❌ Deprecated Python Deprecated Deprecated ❌ Deprecated Ref ✔ ✔ ✔ ✔ Ruby Deprecated Deprecated ❌ Deprecated Simple ✔ ✔ ✔ ✔ SpEL ✔ ✔ ❌ ✔ Tokenize ✔ ✔ ✔ ✔ XML Tokenize ✔ ✔ ✔ ✔ XPath ✔ ✔ ✔ ✔ XQuery ✔ ✔ ✔ ✔
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/components-overvew
23.7. Setting the Hostname
23.7. Setting the Hostname Setup prompts you to supply a host name for this computer, either as a fully-qualified domain name (FQDN) in the format hostname.domainname or as a short host name in the format hostname . Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, specify the short host name only. Note You may give your system any name provided that the full hostname is unique. The hostname may include letters, numbers and hyphens. Change the default setting localhost.localdomain to a unique hostname for each of your Linux instances. Figure 23.23. Setting the hostname 23.7.1. Editing Network Connections Note To change your network configuration after you have completed the installation, use the Network Administration Tool . Type the system-config-network command in a shell prompt to launch the Network Administration Tool . If you are not root, it prompts you for the root password to continue. The Network Administration Tool is now deprecated and will be replaced by NetworkManager during the lifetime of Red Hat Enterprise Linux 6. Usually, the network connection configured earlier in installation phase 1 does not need to be modified during the rest of the installation. You cannot add a new connection on System z because the network subchannels need to be grouped and set online beforehand, and this is currently only done in installation phase 1. To change the existing network connection, click the button Configure Network . The Network Connections dialog appears, which allows you to configure network connections for the system, not all of which are relevant to System z. Figure 23.24. Network Connections All network connections on System z are listed in the Wired tab. By default this contains the connection configured earlier in installation phase 1 and is either eth0 (OSA, LCS), or hsi0 (HiperSockets). Note that on System z you cannot add a new connection here. To modify an existing connection, select a row in the list and click the Edit button. A dialog box appears with a set of tabs appropriate to wired connections, as described below. The most important tabs on System z are Wired and IPv4 Settings . When you have finished editing network settings, click Apply to save the new configuration. If you reconfigured a device that was already active during installation, you must restart the device to use the new configuration - refer to Section 23.7.1.6, "Restart a network device" . 23.7.1.1. Options common to all types of connection Certain configuration options are common to all connection types. Specify a name for the connection in the Connection name field. Select Connect automatically to start the connection automatically when the system boots. When NetworkManager runs on an installed system, the Available to all users option controls whether a network configuration is available system-wide or not. During installation, ensure that Available to all users remains selected for any network interface that you configure. 23.7.1.2. The Wired tab Use the Wired tab to specify or change the media access control (MAC) address for the network adapter, and to set the maximum transmission unit (MTU, in bytes) that can pass through the interface. Figure 23.25. The Wired tab 23.7.1.3. The 802.1x Security tab Use the 802.1x Security tab to configure 802.1X port-based network access control (PNAC). 
Select Use 802.1X security for this connection to enable access control, then specify details of your network. The configuration options include: Authentication Choose one of the following methods of authentication: TLS for Transport Layer Security Tunneled TLS for Tunneled Transport Layer Security , otherwise known as TTLS, or EAP-TTLS Protected EAP (PEAP) for Protected Extensible Authentication Protocol Identity Provide the identity of this server. User certificate Browse to a personal X.509 certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). CA certificate Browse to an X.509 certificate authority certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). Private key Browse to a private key file encoded with Distinguished Encoding Rules (DER), Privacy Enhanced Mail (PEM), or the Personal Information Exchange Syntax Standard (PKCS#12). Private key password The password for the private key specified in the Private key field. Select Show password to make the password visible as you type it. Figure 23.26. The 802.1x Security tab 23.7.1.4. The IPv4 Settings tab Use the IPv4 Settings tab to configure the IPv4 parameters for the previously selected network connection. The address, netmask, gateway, DNS servers and DNS search suffix for an IPv4 connection were configured during installation phase 1 or reflect the following parameters in the parameter file or configuration file: IPADDR , NETMASK , GATEWAY , DNS , SEARCHDNS (Refer to Section 26.3, "Installation Network Parameters" ). Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Automatic (DHCP) IPv4 parameters are configured by the DHCP service on the network. Automatic (DHCP) addresses only The IPv4 address, netmask, and gateway address are configured by the DHCP service on the network, but DNS servers and search domains must be configured manually. Manual IPv4 parameters are configured manually for a static configuration. Link-Local Only A link-local address in the 169.254/16 range is assigned to the interface. Shared to other computers The system is configured to provide network access to other computers. The interface is assigned an address in the 10.42.x.1/24 range, a DHCP server and DNS server are started, and the interface is connected to the default network connection on the system with network address translation (NAT). Disabled IPv4 is disabled for this connection. If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv4 addressing for this connection to complete check box to allow the system to make this connection on an IPv6-enabled network if IPv4 configuration fails but IPv6 configuration succeeds. Figure 23.27. 
The IPv4 Settings tab 23.7.1.4.1. Editing IPv4 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv4 routes dialog appears. Figure 23.28. The Editing IPv4 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Ignore automatically obtained routes to make the interface use only the routes specified for it here. Select Use this connection only for resources on its network to restrict connections only to the local network. 23.7.1.5. The IPv6 Settings tab Use the IPv6 Settings tab to configure the IPv6 parameters for the previously selected network connection. Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Ignore IPv6 is ignored for this connection. Automatic NetworkManager uses router advertisement (RA) to create an automatic, stateless configuration. Automatic, addresses only NetworkManager uses RA to create an automatic, stateless configuration, but DNS servers and search domains are ignored and must be configured manually. Automatic, DHCP only NetworkManager does not use RA, but requests information from DHCPv6 directly to create a stateful configuration. Manual IPv6 parameters are configured manually for a static configuration. Link-Local Only A link-local address with the fe80::/10 prefix is assigned to the interface. If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv6 addressing for this connection to complete check box to allow the system to make this connection on an IPv4-enabled network if IPv6 configuration fails but IPv4 configuration succeeds. Figure 23.29. The IPv6 Settings tab 23.7.1.5.1. Editing IPv6 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv6 routes dialog appears. Figure 23.30. The Editing IPv6 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Use this connection only for resources on its network to restrict connections only to the local network. 23.7.1.6. Restart a network device If you reconfigured a network device that was already in use during installation, you must disconnect and reconnect the device in anaconda for the changes to take effect. Anaconda uses interface configuration (ifcfg) files to communicate with NetworkManager . A device becomes disconnected when its ifcfg file is removed, and becomes reconnected when its ifcfg file is restored, as long as ONBOOT=yes is set. 
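For reference, a minimal interface configuration file might look like the following sketch. The device name and values are illustrative only; an actual file generated during installation contains values specific to your configuration, and on System z it typically also carries additional attributes such as NETTYPE and SUBCHANNELS.
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes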
Refer to the Red Hat Enterprise Linux 6.9 Deployment Guide available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html for more information about interface configuration files. Press Ctrl + Alt + F2 to switch to virtual terminal tty2 . Move the interface configuration file to a temporary location: where device_name is the device that you just reconfigured. For example, ifcfg-eth0 is the ifcfg file for eth0 . The device is now disconnected in anaconda . Open the interface configuration file in the vi editor: Verify that the interface configuration file contains the line ONBOOT=yes . If the file does not already contain the line, add it now and save the file. Exit the vi editor. Move the interface configuration file back to the /etc/sysconfig/network-scripts/ directory: The device is now reconnected in anaconda . Press Ctrl + Alt + F6 to return to anaconda .
[ "mv /etc/sysconfig/network-scripts/ifcfg- device_name /tmp", "vi /tmp/ifcfg- device_name", "mv /tmp/ifcfg- device_name /etc/sysconfig/network-scripts/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-netconfig-s390
Chapter 2. Assessing and filtering your inventory
Chapter 2. Assessing and filtering your inventory Assessing and filtering your inventory will help you identify and eliminate security, operations, and business risks in your fleet. 2.1. Inventory Application Programming Interface (API) Red Hat Insights provides a set of APIs that you can use to interact with specific Insights for Red Hat Enterprise Linux applications, to obtain system details and recommendations. We have designed our APIs to ensure the security of your data. All Insights APIs are Representational State Transfer (REST) APIs. REST APIs are stateless. Statelessness means that servers do not save client data between requests. Our APIs also use token-based authentication, which provides granular control over access permissions and enhances security. Review the following resources to learn more about how you can use the inventory API to locate information, make edits, and automate repetitive tasks: Additional Resources For more information about Red Hat Insights API, see the Red Hat Insights API reference guide: API Catalog For more information about getting started with Red Hat Insights API, see the Red Hat Insights API cheat sheet: Insights API cheat sheet For more information about the inventory API, see Managed Inventory: Managed Inventory API 2.2. Refining your view of systems in inventory There are several ways to refine your inventory view to help you focus on the issues and systems that matter the most. You can filter by Name , Status , Operating System , Data Collector , remote host configuration status , Last seen , Workspace , or Tags . Follow the procedure below to filter your systems: Prerequisites You have Inventory Hosts viewer access. Procedure Navigate to the Red Hat Insights > RHEL > Inventory page. Click the Name filter drop-down. Choose an option from the drop-down menu, such as Name , Status , Operating System , Data Collector , RHC status , Last seen , Workspace , or Tags . Select additional filters within your query. For example, if you chose the Operating System filter, click Filter by operating system in the header to choose a specific version of RHEL. Click the checkbox next to the RHEL version you want to filter by. Optional: To add multiple filters to your query, click an additional filter (such as Data Collector ). A second drop-down appears to the right of the Data Collector filter, called Filter by data collector . Choose the desired data collector. This first filter then appears just below the header. If desired, choose a second filter. You can apply all 8 available filters to your query. Click Reset filters to clear your query. Additional Resources For information about global filters, see the following: System filtering and groups 2.3. Deleting systems from inventory When a system is obsolete or decommissioned, you might choose to remove it from inventory. Use the following procedure to do so: Prerequisites You have Inventory Hosts administrator access. Procedure Navigate to the Red Hat Insights > RHEL > Inventory page. Check the box to the left of the system(s) you want to remove. Click the Delete button to the right of the filter. A Delete from Inventory confirmation dialog box appears. Click Delete to confirm this action. A message box appears in the upper right corner of the screen, stating that the delete operation has been initiated. When the deletion is complete, a message box confirms that the deletion was successful. Caution The selected system(s) will be removed from ALL console.redhat.com applications and services. 
Note A system might reappear in inventory if data collectors are uploading data from systems that are still registered and subscribed . Refer to the documentation for the specific data collector(s) to determine how to permanently unregister or unsubscribe. Additional resources Unregistering from Red Hat Subscription Management Services
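As an illustration of the API access described in Section 2.1, the following sketch lists hosts in inventory over the REST API. The endpoint path and the Bearer token usage shown here are assumptions for illustration only; confirm the current path and authentication method against the API Catalog and the Managed Inventory API reference before use.
# Minimal sketch: list hosts in inventory (path and token handling are assumptions)
curl -s -H "Authorization: Bearer <YOUR_TOKEN>" \
  "https://console.redhat.com/api/inventory/v1/hosts"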
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory/assembly-assessing-and-filtering
Chapter 16. structured
Chapter 16. structured Original log entry as a structured object. This field may be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field will contain an equivalent JSON structure. Otherwise, this field will be empty or absent, and the message field will contain the original log message. The structured field can have any subfields that are included in the log message; there are no restrictions defined here. Data type group Example value map[message:starting fluentd worker pid=21631 ppid=21618 worker=0 pid:21631 ppid:21618 worker:0]
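For illustration only, the example value above corresponds to a structured entry whose JSON form might look like the following sketch. The field names are taken from the example; the value types shown are assumptions, and the schema is not restricted to these fields.
{
  "message": "starting fluentd worker pid=21631 ppid=21618 worker=0",
  "pid": "21631",
  "ppid": "21618",
  "worker": "0"
}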
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/structured
Using the Memcached protocol endpoint with Data Grid
Using the Memcached protocol endpoint with Data Grid Red Hat Data Grid 8.5 Use the Data Grid Memcached endpoint to interact with remote caches Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_memcached_protocol_endpoint_with_data_grid/index
Chapter 73. Kubernetes HPA
Chapter 73. Kubernetes HPA Since Camel 2.23 Both producer and consumer are supported The Kubernetes HPA component is one of the Kubernetes Components which provides a producer to execute Kubernetes Horizontal Pod Autoscaler operations and a consumer to consume events related to Horizontal Pod Autoscaler objects. 73.1. Dependencies When using kubernetes-hpa with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 73.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 73.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 73.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 73.3. Component Options The Kubernetes HPA component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 73.4. Endpoint Options The Kubernetes HPA endpoint is configured using URI syntax: with the following path and query parameters: 73.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 73.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 73.5. Message Headers The Kubernetes HPA component supports 7 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesHPAName (producer) Constant: KUBERNETES_HPA_NAME The HPA name. String CamelKubernetesHPASpec (producer) Constant: KUBERNETES_HPA_SPEC The spec for a HPA. HorizontalPodAutoscalerSpec CamelKubernetesHPALabels (producer) Constant: KUBERNETES_HPA_LABELS The HPA labels. Map CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 73.6. Supported producer operation listHPA listHPAByLabels getHPA createHPA updateHPA deleteHPA 73.7. Kubernetes HPA Producer Examples listHPA: this operation lists the HPAs on a kubernetes cluster. from("direct:list"). toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPA"). to("mock:result"); This operation returns a List of HPAs from your cluster. listHPAByLabels: this operation lists the HPAs by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_HPA_LABELS, labels); } }). toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPAByLabels"). to("mock:result"); This operation returns a List of HPAs from your cluster, using a label selector (with keys key1 and key2, and values value1 and value2). 73.8. Kubernetes HPA Consumer Example fromF("kubernetes-hpa://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); HorizontalPodAutoscaler hpa = exchange.getIn().getBody(HorizontalPodAutoscaler.class); log.info("Got event with hpa name: " + hpa.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the hpa test. 73.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. 
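Before the full list, here is a minimal application.properties sketch for the kubernetes-hpa component. It uses only property names that appear in the table that follows, and the values shown are simply the documented defaults rather than a recommended configuration.
# Illustrative Spring Boot auto-configuration for the kubernetes-hpa component
camel.component.kubernetes-hpa.enabled=true
camel.component.kubernetes-hpa.autowired-enabled=true
camel.component.kubernetes-hpa.bridge-error-handler=false
camel.component.kubernetes-hpa.lazy-start-producer=false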
Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. 
Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
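Most of these options can stay at their defaults; the one that usually needs attention is kubernetes-client, which the components can also pick up automatically when autowired-enabled is true. The following is a minimal sketch, not taken from the component documentation, of a Spring Boot configuration class that exposes a single fabric8 KubernetesClient bean for the kubernetes-* and openshift-* components to autowire. The API server URL and token lookup are placeholders, and the KubernetesClientBuilder class assumes the fabric8 client version shipped with recent camel-kubernetes starters.

import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KubernetesClientConfiguration {

    // With autowired-enabled=true (the default), this single KubernetesClient
    // bean is looked up in the registry and configured on every kubernetes-*
    // component, so no per-component kubernetes-client property is required.
    @Bean
    public KubernetesClient kubernetesClient() {
        Config config = new ConfigBuilder()
                .withMasterUrl("https://api.cluster.example.com:6443") // placeholder API server URL
                .withOauthToken(System.getenv("KUBERNETES_TOKEN"))     // placeholder credential source
                .build();
        return new KubernetesClientBuilder().withConfig(config).build();
    }
}

Producer-related properties such as camel.component.kubernetes-pods.lazy-start-producer=true can then be set in application.properties independently of how the client bean is created.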
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-hpa:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPA\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_HPA_LABELS, labels); } }); toF(\"kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPAByLabels\"). to(\"mock:result\");", "fromF(\"kubernetes-hpa://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); HorizontalPodAutoscaler hpa = exchange.getIn().getBody(HorizontalPodAutoscaler.class); log.info(\"Got event with hpa name: \" + hpa.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-hpa-component-starter
Integrating Amazon Web Services (AWS) data into cost management
Integrating Amazon Web Services (AWS) data into cost management Cost Management Service 1-latest Learn how to add AWS cloud integrations and configure RHEL metering. Red Hat Customer Content Services
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"s3:Get*\", \"s3:List*\" ], \"Resource\": [ \"arn:aws:s3:::<your_bucket_name>\", 1 \"arn:aws:s3:::<your_bucket_name>/*\" ] }, { \"Sid\": \"VisualEditor1\", \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"cur:DescribeReportDefinitions\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"s3:Get*\", \"s3:List*\" ], \"Resource\": [ \"arn:aws:s3:::<your_bucket_name>\", 1 \"arn:aws:s3:::<your_bucket_name>/*\" ] }, { \"Sid\": \"VisualEditor1\", \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"cur:DescribeReportDefinitions\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"athena:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"glue:CreateDatabase\", \"glue:DeleteDatabase\", \"glue:GetDatabase\", \"glue:GetDatabases\", \"glue:UpdateDatabase\", \"glue:CreateTable\", \"glue:DeleteTable\", \"glue:BatchDeleteTable\", \"glue:UpdateTable\", \"glue:GetTable\", \"glue:GetTables\", \"glue:BatchCreatePartition\", \"glue:CreatePartition\", \"glue:DeletePartition\", \"glue:BatchDeletePartition\", \"glue:UpdatePartition\", \"glue:GetPartition\", \"glue:GetPartitions\", \"glue:BatchGetPartition\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\", \"s3:ListBucketMultipartUploads\", \"s3:ListMultipartUploadParts\", \"s3:AbortMultipartUpload\", \"s3:CreateBucket\", \"s3:PutObject\", \"s3:PutBucketPublicAccessBlock\" ], \"Resource\": [ \"arn:aws:s3:::CHANGE-ME*\" 1 ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\": [ \"arn:aws:s3:::CHANGE-ME*\" 2 ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListAllMyBuckets\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"sns:ListTopics\", \"sns:GetTopicAttributes\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"cloudwatch:PutMetricAlarm\", \"cloudwatch:DescribeAlarms\", \"cloudwatch:DeleteAlarms\", \"cloudwatch:GetMetricData\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"lakeformation:GetDataAccess\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"logs:*\" ], \"Resource\": \"*\" } ] }", "{ \"Sid\": \"VisualEditor3\", \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\", \"secretsmanager:DescribeSecret\" ], \"Resource\": \"*\" }", "SELECT * FROM <your_export_name> WHERE ( bill_billing_entity = 'AWS Marketplace' AND line_item_legal_entity like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND line_item_line_item_description like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND line_item_line_item_description like '%RHEL%' ) OR ( line_item_legal_entity like '%AWS%' AND line_item_line_item_description like '%Red Hat%' ) OR ( line_item_legal_entity like '%AWS%' AND line_item_line_item_description like '%RHEL%' ) OR ( line_item_legal_entity like '%AWS%' AND product_product_name like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND product_product_name like '%Red Hat%' ) AND year = '2024' AND month = '07'", "SELECT column_name FROM information_schema.columns WHERE table_name = '<your_export_name>' 
AND column_name LIKE 'resource_tags_%';", "SELECT * FROM <your_export_name> WHERE ( line_item_product_code = 'AmazonEC2' AND strpos(lower(<rhel_tag_column_name>), 'com_redhat_rhel') > 0 ) AND year = '<year>' AND month = '<month>'", "Athena query query = f\"SELECT * FROM {database}.{export_name} WHERE (line_item_product_code = 'AmazonEC2' AND strpos(lower(<rhel_tag_column_name>), 'com_redhat_rhel') > 0) AND year = '{year}' AND month = '{month}'\"", "last_month = now.replace(day=1) - timedelta(days=1) year = last_month.strftime(\"%Y\") month = last_month.strftime(\"%m\") day = last_month.strftime(\"%d\") file_name = 'finalized-data.json'", "file_name = 'finalized_data.json'" ]
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html-single/integrating_amazon_web_services_aws_data_into_cost_management/index
Part V. Deprecated Functionality
Part V. Deprecated Functionality This part provides an overview of functionality that has been deprecated in all minor releases up to Red Hat Enterprise Linux 7.3. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar to, identical to, or more advanced than the deprecated one, and provides further recommendations.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/part-red_hat_enterprise_linux-7.3_release_notes-deprecated_functionality
Configure Red Hat Quay
Configure Red Hat Quay Red Hat Quay 3.9 Customizing Red Hat Quay using configuration options Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/configure_red_hat_quay/index
Chapter 4. Examples
Chapter 4. Examples This chapter demonstrates the use of AMQ C++ through example programs. For more examples, see the AMQ C++ example suite and the Qpid Proton C++ examples . Note The code presented in this guide uses C++11 features. AMQ C++ is also compatible with C++03, but the code requires minor modifications. 4.1. Sending messages This client program connects to a server using <connection-url> , creates a sender for target <address> , sends a message containing <message-body> , closes the connection, and exits. Example: Sending messages #include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/sender.hpp> #include <proton/target.hpp> #include <iostream> #include <string> struct send_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; std::string message_body_ {}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user("<user>"); // opts.password("<password>"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_sender(address_); } void on_sender_open(proton::sender& snd) override { std::cout << "SEND: Opened sender for target address '" << snd.target().address() << "'\n"; } void on_sendable(proton::sender& snd) override { proton::message msg {message_body_}; snd.send(msg); std::cout << "SEND: Sent message '" << msg.body() << "'\n"; snd.close(); snd.connection().close(); } }; int main(int argc, char** argv) { if (argc != 4) { std::cerr << "Usage: send <connection-url> <address> <message-body>\n"; return 1; } send_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; handler.message_body_ = argv[3]; proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << "\n"; return 1; } return 0; } Running the example To run the example program, copy it to a local file, compile it, and execute it from the command line. For more information, see Chapter 3, Getting started . USD g++ send.cpp -o send -std=c++11 -lstdc++ -lqpid-proton-cpp USD ./send amqp://localhost queue1 hello 4.2. Receiving messages This client program connects to a server using <connection-url> , creates a receiver for source <address> , and receives messages until it is terminated or it reaches <count> messages. 
Example: Receiving messages #include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/delivery.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/receiver.hpp> #include <proton/source.hpp> #include <iostream> #include <string> struct receive_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; int desired_ {0}; int received_ {0}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user("<user>"); // opts.password("<password>"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_receiver(address_); } void on_receiver_open(proton::receiver& rcv) override { std::cout << "RECEIVE: Opened receiver for source address '" << rcv.source().address() << "'\n"; } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << "RECEIVE: Received message '" << msg.body() << "'\n"; received_++; if (received_ == desired_) { dlv.receiver().close(); dlv.connection().close(); } } }; int main(int argc, char** argv) { if (argc != 3 && argc != 4) { std::cerr << "Usage: receive <connection-url> <address> [<message-count>]\n"; return 1; } receive_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; if (argc == 4) { handler.desired_ = std::stoi(argv[3]); } proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << "\n"; return 1; } return 0; } Running the example To run the example program, copy it to a local file, compile it, and execute it from the command line. For more information, see Chapter 3, Getting started . USD g++ receive.cpp -o receive -std=c++11 -lstdc++ -lqpid-proton-cpp USD ./receive amqp://localhost queue1
[ "#include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/sender.hpp> #include <proton/target.hpp> #include <iostream> #include <string> struct send_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; std::string message_body_ {}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user(\"<user>\"); // opts.password(\"<password>\"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_sender(address_); } void on_sender_open(proton::sender& snd) override { std::cout << \"SEND: Opened sender for target address '\" << snd.target().address() << \"'\\n\"; } void on_sendable(proton::sender& snd) override { proton::message msg {message_body_}; snd.send(msg); std::cout << \"SEND: Sent message '\" << msg.body() << \"'\\n\"; snd.close(); snd.connection().close(); } }; int main(int argc, char** argv) { if (argc != 4) { std::cerr << \"Usage: send <connection-url> <address> <message-body>\\n\"; return 1; } send_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; handler.message_body_ = argv[3]; proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << \"\\n\"; return 1; } return 0; }", "g++ send.cpp -o send -std=c++11 -lstdc++ -lqpid-proton-cpp ./send amqp://localhost queue1 hello", "#include <proton/connection.hpp> #include <proton/container.hpp> #include <proton/delivery.hpp> #include <proton/message.hpp> #include <proton/messaging_handler.hpp> #include <proton/receiver.hpp> #include <proton/source.hpp> #include <iostream> #include <string> struct receive_handler : public proton::messaging_handler { std::string conn_url_ {}; std::string address_ {}; int desired_ {0}; int received_ {0}; void on_container_start(proton::container& cont) override { cont.connect(conn_url_); // To connect with a user and password: // // proton::connection_options opts {}; // opts.user(\"<user>\"); // opts.password(\"<password>\"); // // cont.connect(conn_url_, opts); } void on_connection_open(proton::connection& conn) override { conn.open_receiver(address_); } void on_receiver_open(proton::receiver& rcv) override { std::cout << \"RECEIVE: Opened receiver for source address '\" << rcv.source().address() << \"'\\n\"; } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << \"RECEIVE: Received message '\" << msg.body() << \"'\\n\"; received_++; if (received_ == desired_) { dlv.receiver().close(); dlv.connection().close(); } } }; int main(int argc, char** argv) { if (argc != 3 && argc != 4) { std::cerr << \"Usage: receive <connection-url> <address> [<message-count>]\\n\"; return 1; } receive_handler handler {}; handler.conn_url_ = argv[1]; handler.address_ = argv[2]; if (argc == 4) { handler.desired_ = std::stoi(argv[3]); } proton::container cont {handler}; try { cont.run(); } catch (const std::exception& e) { std::cerr << e.what() << \"\\n\"; return 1; } return 0; }", "g++ receive.cpp -o receive -std=c++11 -lstdc++ -lqpid-proton-cpp ./receive amqp://localhost queue1" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_cpp_client/examples
Chapter 1. Network APIs
Chapter 1. Network APIs 1.1. AdminNetworkPolicy [policy.networking.k8s.io/v1alpha1] Description AdminNetworkPolicy is a cluster level resource that is part of the AdminNetworkPolicy API. Type object 1.2. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1] Description AdminPolicyBasedExternalRoute is a CRD allowing the cluster administrators to configure policies for external gateway IPs to be applied to all the pods contained in selected namespaces. Egress traffic from the pods that belong to the selected namespaces to outside the cluster is routed through these external gateway IPs. Type object 1.3. BaselineAdminNetworkPolicy [policy.networking.k8s.io/v1alpha1] Description BaselineAdminNetworkPolicy is a cluster level resource that is part of the AdminNetworkPolicy API. Type object 1.4. CloudPrivateIPConfig [cloud.network.openshift.io/v1] Description CloudPrivateIPConfig performs an assignment of a private IP address to the primary NIC associated with cloud VMs. This is done by specifying the IP and Kubernetes node which the IP should be assigned to. This CRD is intended to be used by the network plugin which manages the cluster network. The spec side represents the desired state requested by the network plugin, and the status side represents the current state that this CRD's controller has executed. No users will have permission to modify it, and if a cluster-admin decides to edit it for some reason, their changes will be overwritten the time the network plugin reconciles the object. Note: the CR's name must specify the requested private IP address (can be IPv4 or IPv6). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object 1.6. EgressIP [k8s.ovn.org/v1] Description EgressIP is a CRD allowing the user to define a fixed source IP for all egress traffic originating from any pods which match the EgressIP resource according to its spec definition. Type object 1.7. EgressQoS [k8s.ovn.org/v1] Description EgressQoS is a CRD that allows the user to define a DSCP value for pods egress traffic on its namespace to specified CIDRs. Traffic from these pods will be checked against each EgressQoSRule in the namespace's EgressQoS, and if there is a match the traffic is marked with the relevant DSCP value. Type object 1.8. EgressService [k8s.ovn.org/v1] Description EgressService is a CRD that allows the user to request that the source IP of egress packets originating from all of the pods that are endpoints of the corresponding LoadBalancer Service would be its ingress IP. In addition, it allows the user to request that egress packets originating from all of the pods that are endpoints of the LoadBalancer service would use a different network than the main one. Type object 1.9. Endpoints [v1] Description Endpoints is a collection of endpoints that implement the actual service. Example: Type object 1.10. EndpointSlice [discovery.k8s.io/v1] Description EndpointSlice represents a subset of the endpoints that implement a service. 
For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints. Type object 1.11. EgressRouter [network.operator.openshift.io/v1] Description EgressRouter is a feature allowing the user to define an egress router that acts as a bridge between pods and external systems. The egress router runs a service that redirects egress traffic originating from a pod or a group of pods to a remote external system or multiple destinations as per configuration. It is consumed by the cluster-network-operator. More specifically, given an EgressRouter CR with <name>, the CNO will create and manage: - A service called <name> - An egress pod called <name> - A NAD called <name> Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). EgressRouter is a single egressrouter pod configuration object. Type object 1.12. Ingress [networking.k8s.io/v1] Description Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Type object 1.13. IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 1.14. IPPool [whereabouts.cni.cncf.io/v1alpha1] Description IPPool is the Schema for the ippools API Type object 1.15. MultiNetworkPolicy [k8s.cni.cncf.io/v1beta1] Description MultiNetworkPolicy is a CRD schema to provide NetworkPolicy mechanism for net-attach-def which is specified by the Network Plumbing Working Group. MultiNetworkPolicy is identical to Kubernetes NetworkPolicy, See: https://kubernetes.io/docs/concepts/services-networking/network-policies/ . Type object 1.16. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1] Description NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec Type object 1.17. NetworkPolicy [networking.k8s.io/v1] Description NetworkPolicy describes what network traffic is allowed for a set of Pods Type object 1.18. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object 1.19. PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1] Description PodNetworkConnectivityCheck Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.20. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. 
An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.21. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object
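As one illustration of how these resources are consumed programmatically, the sketch below creates a deny-all-ingress NetworkPolicy in a namespace using the fabric8 Kubernetes client. This is not part of the API reference itself: the client library, the namespace name, and the policy name are assumptions, and the resource(...).create() call shape applies to recent fabric8 6.x releases.

import io.fabric8.kubernetes.api.model.LabelSelector;
import io.fabric8.kubernetes.api.model.networking.v1.NetworkPolicy;
import io.fabric8.kubernetes.api.model.networking.v1.NetworkPolicyBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class DenyAllPolicyExample {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // An empty pod selector matches every pod in the namespace; with no
            // ingress rules listed, all inbound traffic to those pods is denied.
            NetworkPolicy policy = new NetworkPolicyBuilder()
                    .withNewMetadata()
                        .withName("default-deny-ingress")   // hypothetical policy name
                        .withNamespace("demo")              // hypothetical namespace
                    .endMetadata()
                    .withNewSpec()
                        .withPodSelector(new LabelSelector())
                        .withPolicyTypes("Ingress")
                    .endSpec()
                    .build();

            client.network().v1().networkPolicies()
                    .inNamespace("demo")
                    .resource(policy)
                    .create();
        }
    }
}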
[ "Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/network-apis
2.12. OProfile
2.12. OProfile OProfile is a system-wide performance monitoring tool. It uses the processor's dedicated performance monitoring hardware to retrieve information about the kernel and system executables to determine the frequency of certain events, such as when memory is referenced, the number of second-level cache requests, and the number of hardware requests received. OProfile can also be used to determine processor usage, and to determine which applications and services are used most often. However, OProfile does have several limitations: Performance monitoring samples may not be precise. Because the processor may execute instructions out of order, samples can be recorded from a nearby instruction instead of the instruction that triggered the interrupt. OProfile expects processes to start and stop multiple times. As such, samples from multiple runs are allowed to accumulate. You may need to clear the sample data from previous runs. OProfile focuses on identifying problems with processes limited by CPU access. It is therefore not useful for identifying processes that are sleeping while they wait for locks or other events. For more detailed information about OProfile, see Section A.14, "OProfile" , or the Red Hat Enterprise Linux 7 System Administrator's Guide . Alternatively, refer to the documentation on your system, located in /usr/share/doc/oprofile-version .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-oprofile
2.7. DDL Commands
2.7. DDL Commands 2.7.1. DDL Commands JBoss Data Virtualization supports a subset of DDL to create/drop temporary tables and to manipulate procedure and view definitions at runtime. It is not currently possible to arbitrarily drop/create non-temporary metadata entries. See Section 11.1, "DDL Metadata" for DDL usage to define schemas within a VDB. Note A MetadataRepository must be configured to make a non-temporary metadata update persistent. See Runtime Metadata Updates in Red Hat JBoss Data Virtualization Development Guide: Server Development for more information. 2.7.2. Local and Global Temporary Tables Red Hat JBoss Data Virtualization supports creating temporary tables. Temporary tables are dynamically created, but are treated as any other physical table. 2.7.2.1. Local Temporary Tables Local temporary tables can be defined implicitly by referencing them in an INSERT statement or explicitly with a CREATE TABLE statement. Implicitly created temporary tables must have a name that starts with '#'. Creation syntax: Local temporary tables can be defined explicitly with a CREATE TABLE statement: Use the SERIAL data type to specify a NOT NULL and auto-incrementing INTEGER column. The starting value of a SERIAL column is 1. Local temporary tables can be defined implicitly by referencing them in an INSERT statement. Note If #name does not exist, it will be defined using the given column names and types from the value expressions. Note If #name does not exist, it will be defined using the target column names and the types from the query derived columns. If target columns are not supplied, the column names will match the derived column names from the query. Drop syntax: DROP TABLE name The following example is a series of statements that loads a temporary table with data from two sources, and with a manually inserted record, and then uses that temp table in a subsequent query. 2.7.2.2. Global Temporary Tables You can create global temporary tables in Teiid Designer or through the metadata you supply at deploy time. Unlike local temporary tables, you cannot create them at runtime. Your global temporary tables share a common definition through a schema entry. However, a new instance of the temporary table is created in each session. The table is then dropped when the session ends. (There is no explicit drop support.) A common use for a global temporary table is to pass results into and out of procedures. If you use the SERIAL data type, then each session's instance of the global temporary table will have its own sequence. You must explicitly specify UPDATABLE if you want to update the temporary table. 2.7.2.3. Common Features Here are the features of global and local temporary tables: Primary Key Support All key columns must be comparable. If you use a primary key, it will create a clustered index that supports search improvements for comparison , in , like , and order by . You can use Null as a primary key value, but there must only be one row that has an all-null key. Transaction Support THere is a READ_UNCOMMITED transaction isolation level. There are no locking mechanisms available to support higher isolation levels and the result of a rollback may be inconsistent across multiple transactions. If concurrent transactions are not associated with the same local temporary table or session, then the transaction isolation level is effectively serializable. If you want full consistency with local temporary tables, then only use a connection with 1 transaction at a time. 
This mode of operation is ensured by connection pooling that tracks connections by transaction. Limitations With the CREATE TABLE syntax only basic table definition (column name and type information) and an optional primary key are supported. For global temporary tables additional metadata in the create statement is effectively ignored when creating the temporary table instance - but may still be utilized by planning similar to any other table entry. You can use ON COMMIT PRESERVE ROWS . No other ON COMMIT clause is supported. You cannot use the "drop behavior" option in the drop statement. Temporary tables are not fail-over safe. Non-inlined LOB values (XML, CLOB, BLOB) are tracked by reference rather than by value in a temporary table. If you insert LOB values from external sources in your temporary table, they may become unreadable when the associated statement or connection is closed. 2.7.3. Foreign Temporary Tables Unlike a local temporary table, a foreign temporary table is a reference to an actual source table that is created at runtime rather than during the metadata load. A foreign temporary table requires explicit creation syntax: Where the table creation body syntax is the same as a standard CREATE FOREIGN TABLE DDL statement (see Section 11.1, "DDL Metadata" ). In general usage of DDL OPTION, clauses may be required to properly access the source table, including setting the name in source, updatability, native types, etc. The schema name must specify an existing schema/model in the VDB. The table will be accessed as if it is on that source, however within JBoss Data Virtualization the temporary table will still be scoped the same as a non-foreign temporary table. This means that the foreign temporary table will not belong to a JBoss Data Virtualization schema and will be scoped to the session or procedure block where created. The DROP syntax for a foreign temporary table is the same as for a non-foreign temporary table. Neither a CREATE nor a corresponding DROP of a foreign temporary table issue a pushdown command, rather this mechanism simply exposes a source table for use within JBoss Data Virtualization on a temporary basis. There are two usage scenarios for a FOREIGN TEMPORARY TABLE. The first is to dynamically access additional tables on the source. The other is to replace the usage of a JBoss Data Virtualization local temporary table for performance reasons. The usage pattern for the latter case would look like: Note the usage of the native procedure to pass source specific CREATE ddl to the source. JBoss Data Virtualization does not currently attempt to pushdown a source creation of a temporary table based upon the CREATE statement. Some other mechanism, such as the native procedure shown above, must be used to first create the table. Also note the table is explicitly marked as updatable, since DDL defined tables are not updatable by default. The source's handling of temporary tables must also be understood to make this work as intended. Sources that use the same GLOBAL table definition for all sessions while scoping the data to be session specific (such as Oracle) or sources that support session scoped temporary tables (such as PostgreSQL) will work if accessed under a transaction. A transaction is necessary because: the source on commit behavior (most likely DELETE ROWS or DROP) will ensure clean-up. 
Keep in mind that a JBoss Data Virtualization DROP does not issue a source command and is not guaranteed to occur (in some exception cases, loss of DB connectivity, hard shutdown, etc.). the source pool, when using track connections by transaction, will ensure that multiple uses of that source by JBoss Data Virtualization will use the same connection/session and thus the same temporary table and data. Note Since the ON COMMIT clause is not yet supported by JBoss Data Virtualization, it is important to consider that the source table ON COMMIT behavior will likely be different from the default, PRESERVE ROWS, for JBoss Data Virtualization local temporary tables. 2.7.4. Alter View Usage: Syntax Rules: The alter query expression may be prefixed with a cache hint for materialized view definitions. The hint will take effect the next time the materialized view table is loaded. 2.7.5. Alter Procedure Usage: Syntax Rules: The alter block should not include 'CREATE VIRTUAL PROCEDURE'. The alter block may be prefixed with a cache hint for cached procedures. 2.7.6. Create Trigger Usage: Syntax Rules: The target, name, must be an updatable view. An INSTEAD OF TRIGGER must not yet exist for the given event. Triggers are not yet true schema objects. They are scoped only to their view and have no name. Limitations: There is no corresponding DROP operation. See Section 2.7.7, "Alter Trigger" for enabling/disabling an existing trigger. 2.7.7. Alter Trigger Usage: Syntax Rules: The target, name, must be an updatable view. Triggers are not yet true schema objects. They are scoped only to their view and have no name. Update Procedures must already exist for the given trigger event. See Section 2.10.6, "Update Procedures" . Note If the default inherent update is chosen in Teiid Designer, any SQL associated with update (shown in a greyed out text box) is not part of the VDB and cannot be enabled with an alter trigger statement.
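The temporary table and ALTER statements in this section are issued at runtime like any other SQL. Below is a minimal JDBC sketch of the local temporary table usage, assuming a deployed VDB reachable through the Teiid JDBC driver; the VDB name, host, port, and credentials are placeholders for your own deployment.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LocalTempTableExample {
    public static void main(String[] args) throws Exception {
        // Placeholder VDB name, host, port, and credentials.
        // Requires the Teiid JDBC driver on the classpath.
        String url = "jdbc:teiid:MyVDB@mm://dv-host:31000";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {

            // Explicitly create a local temporary table; it is scoped to this session.
            stmt.execute("CREATE LOCAL TEMPORARY TABLE TEMP (a integer, b integer, c integer)");
            stmt.execute("INSERT INTO TEMP VALUES (1, 2, 3)");

            try (ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM TEMP")) {
                while (rs.next()) {
                    System.out.printf("%d %d %d%n", rs.getInt(1), rs.getInt(2), rs.getInt(3));
                }
            }

            // Optional: drop explicitly; otherwise the table is dropped when the session ends.
            stmt.execute("DROP TABLE TEMP");
        }
    }
}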
[ "CREATE LOCAL TEMPORARY TABLE name (column type [NOT NULL], ... [PRIMARY KEY (column, ...)])", "INSERT INTO #name (column, ...) VALUES (value, ...)", "INSERT INTO #name [(column, ...)] select c1, c2 from t", "CREATE LOCAL TEMPORARY TABLE TEMP (a integer, b integer, c integer); INSERT * INTO temp FROM Src1; INSERT * INTO temp FROM Src2; INSERT INTO temp VALUES (1,2,3); SELECT a,b,c FROM Src3, temp WHERE Src3.a = temp.b;", "CREATE GLOBAL TEMPORARY TABLE name (column type [NOT NULL], ... [PRIMARY KEY (column, ...)]) OPTIONS (UPDATABLE 'true')", "CREATE FOREIGN TEMPORARY TABLE name ... ON schema", "//- create the source table call source.native(\"CREATE GLOBAL TEMPORARY TABLE name IF NOT EXISTS ON COMMIT DELETE ROWS\"); //- bring the table into JBoss Data Virtualization CREATE FOREIGN TEMPORARY TABLE name ... OPTIONS (UPDATABLE true) //- use the table //- forget the table DROP TABLE name", "ALTER VIEW name AS queryExpression", "ALTER PROCEDURE name AS block", "CREATE TRIGGER ON name INSTEAD OF INSERT|UPDATE|DELETE AS FOR EACH ROW block", "ALTER TRIGGER ON name INSTEAD OF INSERT|UPDATE|DELETE (AS FOR EACH ROW block) | (ENABLED|DISABLED)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-ddl_commands
Chapter 5. JBoss EAP Security
Chapter 5. JBoss EAP Security JBoss EAP offers the ability to configure security for its own interfaces and services as well as provide security for applications that are running on it. See the Security Architecture guide for an overview of general security concepts as well as JBoss EAP-specific security concepts. See How to Configure Server Security for information on securing JBoss EAP itself. See How to Configure Identity Management for information on providing security for applications deployed to JBoss EAP. See How to Set Up SSO with Kerberos for information on configuring single sign-on for JBoss EAP using Kerberos. See How To Set Up SSO with SAML v2 for information on configuring single sign-on for JBoss EAP using SAML v2.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/jboss_eap_security
Preface
Preface Use accelerators, such as NVIDIA GPUs, AMD GPUs, and Intel Gaudi AI accelerators, to optimize the performance of your end-to-end data science workflows.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_accelerators/pr01
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_hyperconverged_infrastructure/making-open-source-more-inclusive
Chapter 2. Apache Maven and Red Hat Process Automation Manager Spring Boot applications
Chapter 2. Apache Maven and Red Hat Process Automation Manager Spring Boot applications Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built in a correct and uniform manner. A Maven repository stores Java libraries, plug-ins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the online Maven repository with your Spring Boot projects or you can download the Red Hat Process Automation Manager Maven repository. The recommended approach is to use the online Maven repository with your Spring Boot projects. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/maven-con_business-applications
9.7. Storage Devices
9.7. Storage Devices Storage devices and storage pools can use the block device drivers to attach storage devices to virtualized guests. Note that the storage drivers are not storage devices. The drivers are used to attach a backing storage device, file or storage pool volume to a virtualized guest. The backing storage device can be any supported type of storage device, file, or storage pool volume. The IDE driver exposes an emulated block device to guests. The emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtualized guest. The emulated IDE driver is also used to provide virtualized DVD-ROM drives. The VirtIO driver exposes a para-virtualized block device to guests. The para-virtualized block driver is a driver for all storage devices supported by the hypervisor attached to the virtualized guest (except for floppy disk drives, which must be emulated).
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/storage_devices
Chapter 11. Lifecycle bucket configuration in Multicloud Object Gateway
Chapter 11. Lifecycle bucket configuration in Multicloud Object Gateway Multicloud Object Gateway (MCG) lifecycle provides a way to reduce storage costs due to accumulated data objects. Deleting expired objects is a simple way to handle unused data. Data expiration is a part of Amazon Web Services (AWS) lifecycle management and sets an expiration date for automatic deletion. The minimal time resolution of the lifecycle expiration is one day. For more information, see Expiring objects . The AWS S3 API is used to configure bucket lifecycle rules in MCG. For information about the data bucket APIs and their support level, see Support of Multicloud Object Gateway data bucket APIs . There are a few limitations with the expiration rule API for MCG in comparison with AWS: ExpiredObjectDeleteMarker is accepted but it is not processed. There is no option to define expiration conditions for specific noncurrent versions.
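Because MCG exposes the AWS S3 API, a lifecycle expiration rule can be applied with any S3 client or SDK. The following is a minimal sketch using the AWS SDK for Java v1; the endpoint, region, bucket name, prefix, and credentials are placeholders that you would take from your own MCG route and bucket claim secret.

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

public class McgLifecycleExample {
    public static void main(String[] args) {
        // Placeholder MCG S3 endpoint and credentials.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://s3-openshift-storage.apps.example.com", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
                .withPathStyleAccessEnabled(true)
                .build();

        // Expire objects under the logs/ prefix 30 days after creation.
        BucketLifecycleConfiguration.Rule expireLogs = new BucketLifecycleConfiguration.Rule()
                .withId("expire-logs-after-30-days")
                .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("logs/")))
                .withExpirationInDays(30)
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        s3.setBucketLifecycleConfiguration("my-mcg-bucket",
                new BucketLifecycleConfiguration().withRules(expireLogs));
    }
}

Only the plain expiration shown here takes effect on MCG, given the limitations listed above; the same rule could also be applied with the aws s3api put-bucket-lifecycle-configuration command pointed at the MCG endpoint.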
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/con_lifecycle-bucket-configuration-in-multicloud-object-gateway_rhodf
Chapter 59. Installation and Booting
Chapter 59. Installation and Booting FIPS mode unsupported when installing from an HTTPS kickstart source Installation images do not support FIPS mode during installation with an HTTPS kickstart source. As a consequence, it is currently impossible to install a system with the fips=1 and inst.ks=https://<location>/ks.cfg options added to the command line. (BZ# 1341280 ) PXE boot with UEFI and IPv6 displays the GRUB2 shell instead of the operating system selection menu When the Pre-Boot Execution Environment (PXE) starts on a client configured with UEFI and IPv6, the boot menu configured in the /boot/grub/grub.cfg file is not displayed. After a timeout, the GRUB2 shell is displayed instead of the configured operating system selection menu. (BZ#1154226) Specifying a driverdisk partition with non-alphanumeric characters generates an invalid output Kickstart file When installing Red Hat Enterprise Linux using the Anaconda installer, you can add a driver disk by including a path to the partition containing the driver disk in the Kickstart file. At present, if you specify the partition by a LABEL or CDLABEL that has non-alphanumeric characters in it, for example: the output Kickstart file created during the Anaconda installation will contain incorrect information. To work around this problem, use only alphanumeric characters when specifying the partition by LABEL or CDLABEL. (BZ# 1452770 ) The Scientific Computing variant is missing packages required for certain security profiles When installing the Red Hat Enterprise Linux for Scientific Computing variant, also known as Compute Node, you can select a security profile similarly to any other variant's installation process. However, since this variant is meant to be minimal, it is missing packages which are required by certain profiles, such as United States Government Configuration Baseline . If you select this profile, the installer displays a warning that some packages are missing. The warning allows you to continue the installation despite missing packages, which can be used to work around the problem. The installation will complete normally; however, note that if you install the system despite the warning, and then attempt to run a security scan after the installation, the scan will report failing rules due to these missing packages. This behavior is expected. (BZ#1462647)
[ "driverdisk \"CDLABEL=Fedora 23 x86_64:/path/to/rpm\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/known_issues_installation_and_booting
Chapter 1. Installation methods
Chapter 1. Installation methods You can install OpenShift Container Platform on Amazon Web Services (AWS) using installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. You can also install OpenShift Container Platform on a single node, which is a specialized installation method that is ideal for edge computing environments. 1.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on AWS : You can install OpenShift Container Platform on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on AWS : You can install a customized cluster on AWS infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on AWS with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on AWS in a restricted network : You can install OpenShift Container Platform on AWS on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. Installing a cluster on an existing Virtual Private Cloud : You can install OpenShift Container Platform on an existing AWS Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing AWS VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. Installing a cluster on AWS into a government or secret region : OpenShift Container Platform can be deployed into AWS regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud. 1.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on AWS infrastructure that you provision, by using one of the following methods: Installing a cluster on AWS infrastructure that you provide : You can install OpenShift Container Platform on AWS infrastructure that you provide. You can use the provided CloudFormation templates to create stacks of AWS resources that represent each of the components required for an OpenShift Container Platform installation. 
Installing a cluster on AWS in a restricted network with user-provisioned infrastructure : You can install OpenShift Container Platform on AWS infrastructure that you provide by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the AWS APIs. 1.3. Installing a cluster on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the requirements for installing on a single node , and the additional requirements for installing single-node OpenShift on a cloud provider . After addressing the requirements for single node installation, use the Installing a customized cluster on AWS procedure to install the cluster. The installing single-node OpenShift manually section contains an exemplary install-config.yaml file when installing an OpenShift Container Platform cluster on a single node. 1.4. Additional resources Installation process
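For orientation, the following is a minimal sketch of the installer-provisioned flow that the quick install method uses. It assumes the openshift-install binary, a pull secret, and AWS credentials are already in place; the asset directory name mycluster is only an illustrative assumption, not a required value.
mkdir mycluster
# Prompts interactively for platform, AWS region, base domain, cluster name,
# pull secret, and SSH key, then provisions the AWS infrastructure and the cluster.
./openshift-install create cluster --dir mycluster --log-level=info
# The installer writes the admin kubeconfig under the asset directory.
export KUBECONFIG=mycluster/auth/kubeconfig
oc get nodes
User-provisioned and restricted-network installations replace the single create cluster step with the manifest, Ignition, and infrastructure steps described in the linked procedures.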
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_aws/preparing-to-install-on-aws
Lightspeed
Lightspeed OpenShift Container Platform 4.16 OpenShift Lightspeed overview Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/lightspeed/index
8.2. Monitoring and Diagnosing Performance Problems
8.2. Monitoring and Diagnosing Performance Problems Red Hat Enterprise Linux 7 provides a number of tools that are useful for monitoring system performance and diagnosing performance problems related to I/O and file systems and their configuration. This section outlines the available tools and gives examples of how to use them to monitor and diagnose I/O and file system related performance issues. 8.2.1. Monitoring System Performance with vmstat Vmstat reports on processes, memory, paging, block I/O, interrupts, and CPU activity across the entire system. It can help administrators determine whether the I/O subsystem is responsible for any performance issues. The information most relevant to I/O performance is in the following columns: si Swap in, or reads from swap space, in KB. so Swap out, or writes to swap space, in KB. bi Block in, or block read operations, in KB. bo Block out, or block write operations, in KB. wa The percentage of CPU time spent waiting for I/O operations to complete. Swap in and swap out are particularly useful when your swap space and your data are on the same device, and as indicators of memory usage. Additionally, the free, buff, and cache columns can help identify write-back frequency. A sudden drop in cache values and an increase in free values indicate that write-back and page cache invalidation have begun. If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance, administrators can use iostat to determine the responsible I/O device. vmstat is provided by the procps-ng package. For detailed information about using vmstat , see the man page: 8.2.2. Monitoring I/O Performance with iostat Iostat is provided by the sysstat package. It reports on I/O device load in your system. If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance, you can use iostat to determine the I/O device responsible. You can focus the output of iostat reports on a specific device by using the parameters defined in the iostat man page: 8.2.2.1. Detailed I/O Analysis with blktrace Blktrace provides detailed information about how time is spent in the I/O subsystem. The companion utility blkparse reads the raw output from blktrace and produces a human-readable summary of input and output operations recorded by blktrace . For more detailed information about this tool, see the blktrace (8) and blkparse (1) man pages: 8.2.2.2. Analyzing blktrace Output with btt The btt utility is provided as part of the blktrace package. It analyzes blktrace output and displays the amount of time that data spends in each area of the I/O stack, making it easier to spot bottlenecks in the I/O subsystem. Some of the important events tracked by the blktrace mechanism and analyzed by btt are: Queuing of the I/O event ( Q ) Dispatch of the I/O to the driver event ( D ) Completion of I/O event ( C ) You can include or exclude factors involved with I/O performance issues by examining combinations of events. To inspect the timing of sub-portions of each I/O device, look at the timing between captured blktrace events for the I/O device. For example, the following command reports the total amount of time spent in the lower part of the kernel I/O stack ( Q2C ), which includes scheduler, driver, and hardware layers, as an average under await time: If the device takes a long time to service a request ( D2C ), the device may be overloaded, or the workload sent to the device may be sub-optimal.
If block I/O is queued for a long time before being dispatched to the storage device ( Q2G ), it may indicate that the storage in use is unable to serve the I/O load. For example, a LUN queue full condition has been reached and is preventing the I/O from being dispatched to the storage device. Looking at the timing across adjacent I/O can provide insight into some types of bottleneck situations. For example, if btt shows that the time between requests being sent to the block layer ( Q2Q ) is larger than the total time that requests spent in the block layer ( Q2C ), this indicates that there is idle time between I/O requests and the I/O subsystem may not be responsible for performance issues. Comparing Q2C values across adjacent I/O can show the amount of variability in storage service time. The values can be either: fairly consistent with a small range, or highly variable in the distribution range, which indicates a possible storage device side congestion issue. For more detailed information about this tool, see the btt (1) man page: 8.2.2.3. Analyzing blktrace Output with iowatcher The iowatcher tool can use blktrace output to graph I/O over time. It focuses on the Logical Block Address (LBA) of disk I/O, throughput in megabytes per second, the number of seeks per second, and I/O operations per second. This can help to identify when you are hitting the operations-per-second limit of a device. For more detailed information about this tool, see the iowatcher (1) man page. 8.2.3. Storage Monitoring with SystemTap The Red Hat Enterprise Linux 7 SystemTap Beginners Guide includes several sample scripts that are useful for profiling and monitoring storage performance. The following SystemTap example scripts relate to storage performance and may be useful in diagnosing storage or file system performance problems. By default they are installed to the /usr/share/doc/systemtap-client/examples/io directory. disktop.stp Checks the status of reading/writing disk every 5 seconds and outputs the top ten entries during that period. iotime.stp Prints the amount of time spent on read and write operations, and the number of bytes read and written. traceio.stp Prints the top ten executables based on cumulative I/O traffic observed, every second. traceio2.stp Prints the executable name and process identifier as reads and writes to the specified device occur. inodewatch.stp Prints the executable name and process identifier each time a read or write occurs to the specified inode on the specified major/minor device. inodewatch2.stp Prints the executable name, process identifier, and attributes each time the attributes are changed on the specified inode on the specified major/minor device.
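As a rough illustration of how these tools fit together, the following sketch walks from system-wide symptoms to per-device and per-request analysis. The device name /dev/sda, the sample counts, and the trace file names are assumptions for illustration; the SystemTap step additionally requires the systemtap package and matching kernel debuginfo.
vmstat 1 10                          # watch si/so, bi/bo, and wa for system-wide I/O pressure
iostat -x 1 10                       # identify the busy device (await, %util columns)
blktrace -d /dev/sda -o trace -w 30  # capture 30 seconds of block-layer events for that device
blkparse -i trace -d trace.bin       # human-readable summary plus a binary dump for btt
btt -i trace.bin                     # per-stage timings such as Q2G, D2C, and Q2C
stap /usr/share/doc/systemtap-client/examples/io/iotime.stp  # example script: time spent on reads and writes per file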
[ "man vmstat", "man iostat", "man blktrace", "man blkparse", "iostat -x [...] Device: await r_await w_await vda 16.75 0.97 162.05 dm-0 30.18 1.13 223.45 dm-1 0.14 0.14 0.00 [...]", "man btt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-storage_and_file_systems-monitoring_and_diagnosing_performance_problems
Chapter 1. GFS2 Overview
Chapter 1. GFS2 Overview The Red Hat GFS2 file system is a 64-bit symmetric cluster file system which provides a shared namespace and manages coherency between multiple nodes sharing a common block device. A GFS2 file system is intended to provide a feature set which is as close as possible to a local file system, while at the same time enforcing full cluster coherency between nodes. In a few cases, the Linux file system API does not allow the clustered nature of GFS2 to be totally transparent; for example, programs using Posix locks in GFS2 should avoid using the GETLK function since, in a clustered environment, the process ID may be for a different node in the cluster. In most cases, however, the functionality of a GFS2 file system is identical to that of a local file system. The Red Hat Enterprise Linux (RHEL) Resilient Storage Add-On provides GFS2, and it depends on the RHEL High Availability Add-On to provide the cluster management required by GFS2. For information about the High Availability Add-On see Configuring and Managing a Red Hat Cluster . The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes. To get the best performance from GFS2, it is important to take into account the performance considerations which stem from the underlying design. Just like a local file system, GFS2 relies on the page cache in order to improve performance by local caching of frequently used data. In order to maintain coherency across the nodes in the cluster, cache control is provided by the glock state machine. For more information on glocks and their performance implications, see Section 2.9, "GFS2 Node Locking" . This chapter provides some basic, abbreviated information as background to help you understand GFS2. 1.1. GFS2 Support Limits Table 1.1, "GFS2 Support Limits" summarizes the current maximum file system size and number of nodes that GFS2 supports. Table 1.1. GFS2 Support Limits Maximum number of nodes 16 (x86, Power8 on PowerVM) 4 (s390x under z/VM) Maximum file system size 100TB on all supported architectures GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. If your system requires larger GFS2 file systems than are currently supported, contact your Red Hat service representative. Note Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 7 release, Red Hat does not support the use of GFS2 as a single-node file system. Red Hat does support a number of high-performance single-node file systems which are optimized for single-node use and thus have generally lower overhead than a cluster file system. Red Hat recommends using these file systems in preference to GFS2 in cases where only a single node needs to mount the file system. Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes). When determining the size of your file system, you should consider your recovery needs. Running the fsck.gfs2 command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk subsystem failure, recovery time is limited by the speed of your backup media. For information on the amount of memory the fsck.gfs2 command requires, see Section 3.10, "Repairing a GFS2 File System" .
While a GFS2 file system may be used outside of LVM, Red Hat supports only GFS2 file systems that are created on a CLVM logical volume. CLVM is included in the Resilient Storage Add-On. It is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd , which manages LVM logical volumes in a cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the LVM volume manager, see Logical Volume Manager Administration . Note When you configure a GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared storage. Asymmetric cluster configurations in which some nodes have access to the shared storage and others do not are not supported. This does not require that all nodes actually mount the GFS2 file system itself.
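To make the cluster requirements above concrete, the following is a minimal sketch of creating a GFS2 file system on a clustered logical volume. The cluster name mycluster, the volume group and logical volume names, the mount point, and the journal count are assumptions; create one journal per node that will mount the file system.
# The -t value must match the cluster name configured in the cluster manager.
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 2 /dev/vg_cluster/lv_gfs2
# Mount on each node; in practice the mount is usually managed as a cluster resource.
mount -t gfs2 /dev/vg_cluster/lv_gfs2 /mnt/gfs2
# Read-only consistency check; run only while the file system is unmounted on all nodes.
fsck.gfs2 -n /dev/vg_cluster/lv_gfs2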
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/ch-overview-GFS2
4.3. Namespaces Section
4.3. Namespaces Section The Namespaces section is a collapsible area used to create and maintain the namespace mappings declared in the CND file. A namespace mapping consists of a unique prefix, a unique URI, and an optional comment. You can copy and paste Namespace mappings within the same CND editor or between different CND editors. Namespace mappings are edited using the Namespace Editor.
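For reference, a namespace mapping as it appears in the generated CND file follows the <prefix='URI'> form; the prefix, URI, and file name below are illustrative assumptions only.
cat > example.cnd <<'EOF'
// Namespace mapping: a unique prefix bound to a unique URI
<acme = 'http://www.example.com/acme/1.0'>
EOF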
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/namespaces_section
Chapter 20. Installation configuration
Chapter 20. Installation configuration 20.1. Customizing nodes Although directly making changes to OpenShift Container Platform nodes is discouraged, there are times when it is necessary to implement a required low-level security, redundancy, networking, or performance feature. Direct changes to OpenShift Container Platform nodes can be done by: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Creating an Ignition config that is passed to coreos-installer when installing bare-metal nodes. The following sections describe features that you might want to configure on your nodes in this way. 20.1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 20.1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 20.1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 20.1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.9.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 20.1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS is not supported. You need to do some low-level network configuration before the systems start. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 20.1.3. 
Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 20.1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Review the configuration file and verify that KMOD_CONTAINER_BUILD_FILE is set to Dockerfile.rhel : USD cat simple-kmod.conf Example configuration file KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of the kmods-via-containers@.service template for your kernel module, simple-kmod in this example: USD sudo make install Build the kernel module for the currently running kernel: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable kmods-via-containers@simple-kmod.service --now Review the service status: USD sudo systemctl status kmods-via-containers@simple-kmod.service Example output ● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/kmods-via-containers@.service; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 20.1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when the OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages.
This must include new kernel packages as they are needed to match newly installed kernels. 20.1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the software: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmod-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.9.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: kmods-via-containers@simple-kmod.service enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the kmods-via-containers@simple-kmod.service systemd service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 20.1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 20.1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2: This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor contained within a server. You can use this mode to prevent the boot disk data on a cluster node from being decrypted if the disk is removed from the server. Tang: Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents the data from being decrypted unless the nodes are on a secure network where the Tang servers can be accessed. Clevis is an automated decryption framework that is used to implement the decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. Note On versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or above, and disk encryption should be configured by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure and user-provisioned infrastructure deployments Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase so all data written to disk, from first boot forward, is encrypted Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 20.1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously, so that the boot disk data can be decrypted only if the TPM secure cryptoprocessor is present and the Tang servers can be accessed over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. For example, the threshold value of 2 in the following configuration can be reached by accessing the two Tang servers, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.9.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 luks: tpm2: true 1 tang: 2 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 3 openshift: fips: true 1 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 2 Include this section if you want to use one or more Tang servers. 3 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. 
Note If you require both TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible for the threshold to be reached by using one of the encryption modes only. For example, if tpm2 is set to true and you specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers even if the TPM secure cryptoprocessor is not available. 20.1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure as long as one device remains available. Mirroring does not support replacement of a failed disk. To restore the mirror to a pristine, non-degraded state, reprovision the node. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 20.1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the BIOS on each node. This is required on most Dell systems. Check the manual for your computer. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command is used in this step only to generate a thumbprint of the exchange key. No data is being passed to the command for encryption at this point, so /dev/null is provided as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Note RHEL 8 provides Clevis version 15, which uses the SHA-1 hash algorithm to generate thumbprints. Some other distributions provide Clevis version 17 or later, which use the SHA-256 hash algorithm for thumbprints. 
You must use a Clevis version that uses SHA-1 to create the thumbprint, to prevent Clevis binding issues when you install Red Hat Enterprise Linux CoreOS (RHCOS) on your OpenShift Container Platform cluster nodes. If the nodes are configured with static IP addressing, use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. Butane config example for a boot device variant: openshift version: 4.9.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12 1 2 For control plane configurations, replace worker with master in both of these locations. 3 On ppc64le nodes, set this field to ppc64le . On all other nodes, this field can be omitted. 4 Include this section if you want to encrypt the root file system. For more details, see the About disk encryption section. 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information on this topic, see the Configuring an encryption threshold section. 10 Include this section if you want to mirror the boot disk. For more details, see About disk mirroring . 11 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 12 Include this directive to enable FIPS mode on your cluster. Important If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane config. In addition, if you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane config, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane config and save it to the <installation_directory>/openshift directory. 
For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configs in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node. The following example starts a debug pod for the compute-1 node: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 In the example, the /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 
2 In the example, the /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices that are used by the software RAID device. List the file systems that are mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 20.1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. 
OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.9.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.9.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 20.1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. 
Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 20.1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography . 20.2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 20.2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Allowlist the following registry URLs: URL Port Function registry.redhat.io 443, 80 Provides core container images access.redhat.com 443, 80 Provides core container images quay.io 443, 80 Provides core container images cdn.quay.io 443, 80 Provides core container images cdn01.quay.io 443, 80 Provides core container images cdn02.quay.io 443, 80 Provides core container images cdn03.quay.io 443, 80 Provides core container images sso.redhat.com 443, 80 The https://console.redhat.com/openshift site uses authentication from sso.redhat.com You can use the wildcard *.quay.io instead of cdn0[1-3].quay.io in your allowlist. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . 
Allowlist any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443, 80 Required for Telemetry api.access.redhat.com 443, 80 Required for Telemetry infogw.api.openshift.com 443, 80 Required for Telemetry console.redhat.com/api/ingress , cloud.redhat.com/api/ingress 443, 80 Required for Telemetry and for insights-operator If you use Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that provide the cloud provider API and DNS for that cloud: Cloud URL Port Function AWS *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must allowlist the following URLs: 443, 80 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to determine the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443, 80 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443, 80 Allows the assignment of metadata about AWS resources in the form of tags. GCP *.googleapis.com 443, 80 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to determine the endpoints to allow for your APIs. accounts.google.com 443, 80 Required to access your GCP account. Azure management.azure.com 443, 80 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. *.blob.core.windows.net 443, 80 Required to download Ignition files. login.microsoftonline.com 443, 80 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function mirror.openshift.com 443, 80 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. 
storage.googleapis.com/openshift-release 443, 80 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. *.apps.<cluster_name>.<base_domain> 443, 80 Required to access the default cluster routes unless you set an ingress wildcard during installation. quayio-production-s3.s3.amazonaws.com 443, 80 Required to access Quay image content in AWS. api.openshift.com 443, 80 Required both for your cluster token and to check if updates are available for the cluster. rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com , rhcos.mirror.openshift.com 443, 80 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. console.redhat.com/openshift 443, 80 Required for your cluster token. registry.access.redhat.com 443, 80 Required for odo CLI. sso.redhat.com 443, 80 The https://console.redhat.com/openshift site uses authentication from sso.redhat.com Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443, 80 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443, 80 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443, 80 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall.
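After the firewall rules are in place, it can be useful to confirm that the required endpoints are reachable from the network that hosts the cluster. The following is a minimal sketch, assuming a Linux host with curl installed; the list of URLs is abbreviated and should be extended to cover your full allowlist: # Check that a few of the required endpoints respond. # Any HTTP status code (including 403) indicates that the firewall permits the connection. for url in https://registry.redhat.io https://quay.io https://api.openshift.com https://mirror.openshift.com; do echo -n "${url}: "; curl -s -o /dev/null -w '%{http_code}\n' --connect-timeout 5 "${url}"; done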
[ "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane", "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane", "chmod +x butane", "echo USDPATH", "butane <butane_file>", "variant: openshift version: 4.9.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-custom.bu -o ./99-worker-custom.yaml", "oc create -f 99-worker-custom.yaml", "./openshift-install create manifests --dir <installation_directory>", "cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "cd kmods-via-containers/", "sudo make install", "sudo systemctl daemon-reload", "cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "cd kvc-simple-kmod", "cat simple-kmod.conf", "KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"", "sudo make install", "sudo kmods-via-containers build simple-kmod USD(uname -r)", "sudo systemctl enable [email protected] --now", "sudo systemctl status [email protected]", "● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "dmesg | grep 'Hello world'", "[ 6420.761332] Hello world from simple_kmod.", "sudo cat /proc/simple-procfs-kmod", "simple-procfs-kmod number = 0", "sudo spkut 44", "KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "FAKEROOT=USD(mktemp -d)", "cd kmods-via-containers", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd ../kvc-simple-kmod", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree", "variant: openshift version: 4.9.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true", "butane 99-simple-kmod.bu --files-dir . 
-o 99-simple-kmod.yaml", "oc create -f 99-simple-kmod.yaml", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "variant: openshift version: 4.9.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 luks: tpm2: true 1 tang: 2 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 3 openshift: fips: true", "sudo yum install clevis", "clevis-encrypt-tang '{\"url\":\"http://tang.example.com:7500\"}' < /dev/null > /dev/null 1", "The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1", "./openshift-install create manifests --dir <installation_directory> 1", "variant: openshift version: 4.9.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12", "butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml", "oc debug node/compute-1", "chroot /host", "cryptsetup status root", "/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write", "clevis luks list -d /dev/sda4 1", "1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1", "cat /proc/mdstat", "Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>", "mdadm --detail /dev/md126", "/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8", "mount | grep /dev/md", "/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on 
/var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)", "variant: openshift version: 4.9.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true", "variant: openshift version: 4.9.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true", "butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1", "variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml" ]
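Although not shown in the procedure above, one way to confirm that the chrony configuration reached a node is to open a debug shell on it, reusing the oc debug and chroot pattern that appears earlier in this listing. This is a minimal sketch; the node name is a placeholder: # Open a debug shell on the node and inspect the rendered chrony configuration. oc debug node/<node_name> chroot /host cat /etc/chrony.conf chronyc sources # list the time sources that chronyd is currently using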
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installation-configuration
Part III. Technology Previews
Part III. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 7.4. For information on Red Hat scope of support for Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology-previews
Chapter 4. Strategies for repartitioning a disk
Chapter 4. Strategies for repartitioning a disk There are different approaches to repartitioning a disk. These include: Unpartitioned free space is available. An unused partition is available. Free space in an actively used partition is available. Note The following examples are simplified for clarity and do not reflect the exact partition layout when actually installing Red Hat Enterprise Linux. 4.1. Using unpartitioned free space Partitions that are already defined and do not span the entire hard disk leave unallocated space that is not part of any defined partition. The following diagram shows what this might look like. Figure 4.1. Disk with unpartitioned free space The first diagram represents a disk with one primary partition and an undefined partition with unallocated space. The second diagram represents a disk with two defined partitions with allocated space. An unused hard disk also falls into this category. The only difference is that all the space is not part of any defined partition. On a new disk, you can create the necessary partitions from the unused space. Most preinstalled operating systems are configured to take up all available space on a disk drive. 4.2. Using space from an unused partition In the following example, the first diagram represents a disk with an unused partition. The second diagram represents reallocating an unused partition for Linux. Figure 4.2. Disk with an unused partition To use the space allocated to the unused partition, delete the partition and then create the appropriate Linux partition instead. Alternatively, during the installation process, delete the unused partition and manually create new partitions. 4.3. Using free space from an active partition This process can be difficult to manage because an active partition that is already in use contains the required free space. In most cases, hard disks of computers with preinstalled software contain one larger partition holding the operating system and data. Warning If you want to use an operating system (OS) on an active partition, you must reinstall the OS. Be aware that some computers, which include pre-installed software, do not include installation media to reinstall the original OS. Check whether this applies to your OS before you destroy an original partition and the OS installation. To optimize the use of available free space, you can use the methods of destructive or non-destructive repartitioning. 4.3.1. Destructive repartitioning Destructive repartitioning destroys the partition on your hard drive and creates several smaller partitions instead. Back up any needed data from the original partition, because this method deletes the complete contents. After creating a smaller partition for your existing operating system, you can: Reinstall software. Restore your data. Start your Red Hat Enterprise Linux installation. The following diagram is a simplified representation of using the destructive repartitioning method. Figure 4.3. Destructive repartitioning action on disk Warning This method deletes all data previously stored in the original partition. 4.3.2. Non-destructive repartitioning Non-destructive repartitioning resizes partitions without any data loss. This method is reliable; however, it requires more processing time on large drives. The following is a list of methods that can help initiate non-destructive repartitioning. Compress existing data The storage location of some data cannot be changed. 
This can prevent the resizing of a partition to the required size, and ultimately lead to a destructive repartitioning process. Compressing data in an already existing partition can help you resize your partitions as needed. It can also help to maximize the free space available. The following diagram is a simplified representation of this process. Figure 4.4. Data compression on a disk To avoid any possible data loss, create a backup before continuing with the compression process. Resize the existing partition By resizing an already existing partition, you can free up more space. Depending on your resizing software, the results may vary. In the majority of cases, you can create a new unformatted partition of the same type as the original partition. The steps you take after resizing can depend on the software you use. In the following example, the best practice is to delete the new DOS (Disk Operating System) partition and create a Linux partition instead. Verify what is most suitable for your disk before initiating the resizing process. Figure 4.5. Partition resizing on a disk Optional: Create new partitions Some pieces of resizing software support Linux-based systems. In such cases, there is no need to delete the newly created partition after resizing. Creating a new partition afterwards depends on the software you use. The following diagram represents the disk state before and after creating a new partition. Figure 4.6. Disk with final partition configuration
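Before choosing one of the strategies above, it helps to confirm how much unpartitioned free space a disk actually has, as in the following minimal sketch; the device name /dev/sda is an example and must be adjusted for your system: # Print the partition table, including any unallocated free space on the disk. parted /dev/sda unit GiB print free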
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/strategies-for-repartitioning-a-disk_managing-storage-devices
Chapter 5. Developer previews
Chapter 5. Developer previews This section describes the developer preview features introduced in Red Hat OpenShift Data Foundation 4.14. Important Developer preview feature is subject to Developer preview support limitations. Developer preview releases are not intended to be run in production environments. The clusters deployed with the developer preview features are considered to be development clusters and are not supported through the Red Hat Customer Portal case management system. If you need assistance with developer preview features, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. 5.1. Custom timeouts for the reclaim space operation OpenShift Data Foundation now allows you to set a custom timeout value for the reclaim space operation. Previously, depending on RBD volume size and its data pattern, the reclaim space operation might have failed with the error context deadline exceeded . Adjusting the timeout value avoids this error. For more information, see the knowledgebase article Customize timeouts for Reclaim Space Operation . 5.2. Expansion of encrypted RBD volumes OpenShift Data Foundation now enables resize capability on encrypted RBD persistent volume claims (PVCs). For more information, see the knowledgebase article Enabling resize for encrypted RBD PVC . 5.3. IPV6 support for external mode OpenShift Data Foundation now allows users to use the IPv6 Red Hat Ceph Storage external standalone Ceph cluster to connect with the IPV6 OpenShift Container Platform cluster. Users can pass the IPv6 endpoints using the same endpoint flags while running the python script. 5.4. Network File System supports export sharing across namespaces When OpenShift Data Foundation is used to dynamically create an NFS-export, the PersistentVolumeClaim is used to access the NFS-export in a pod. It is not immediately possible to use the same NFS-export for a different application in another OpenShift Namespace. You can now create a second PersistentVolume that can be bound to a second PersistentVolumeClaim in another OpenShift namespace. For more information, see the knowledgebase article ODF provisioned NFS/PersistentVolume sharing between Namespaces . 5.5. Data compression on the wire Data compression on the wire helps in multi-availability zones deployment by lowering latency and network costs. It also helps in cases where the network bandwidth is a bottleneck for performance. For more information, see the knowledgeabase article In-transit compression .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/4.14_release_notes/developer_previews
4.2.2. Displaying Physical Volumes
4.2.2. Displaying Physical Volumes There are three commands you can use to display properties of LVM physical volumes: pvs , pvdisplay , and pvscan . The pvs command provides physical volume information in a configurable form, displaying one line per physical volume. The pvs command provides a great deal of format control, and is useful for scripting. For information on using the pvs command to customize your output, see Section 4.9, "Customized Reporting for LVM" . The pvdisplay command provides a verbose multi-line output for each physical volume. It displays physical properties (size, extents, volume group, etc.) in a fixed format. The following example shows the output of the pvdisplay command for a single physical volume. The pvscan command scans all supported LVM block devices in the system for physical volumes. The following command shows all physical devices found: You can define a filter in the lvm.conf so that this command will avoid scanning specific physical volumes. For information on using filters to control which devices are scanned, see Section 4.6, "Controlling LVM Device Scans with Filters" .
[ "pvdisplay --- Physical volume --- PV Name /dev/sdc1 VG Name new_vg PV Size 17.14 GB / not usable 3.40 MB Allocatable yes PE Size (KByte) 4096 Total PE 4388 Free PE 4375 Allocated PE 13 PV UUID Joqlch-yWSj-kuEn-IdwM-01S9-XO8M-mcpsVe", "pvscan PV /dev/sdb2 VG vg0 lvm2 [964.00 MB / 0 free] PV /dev/sdc1 VG vg0 lvm2 [964.00 MB / 428.00 MB free] PV /dev/sdc2 lvm2 [964.84 MB] Total: 3 [2.83 GB] / in use: 2 [1.88 GB] / in no VG: 1 [964.84 MB]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/physvol_display
2.4. Download and Install JBoss Data Grid
2.4. Download and Install JBoss Data Grid Use the following steps to download and install Red Hat JBoss Data Grid: Download JBoss Data Grid from the Red Hat Customer Portal. Verify the downloaded files. Install JBoss Data Grid. 2.4.1. Download Red Hat JBoss Data Grid Follow the listed steps to download Red Hat JBoss Data Grid from the Customer Portal: Procedure 2.2. Download JBoss Data Grid Log into the Customer Portal at https://access.redhat.com . Click the Downloads button near the top of the page. In the Product Downloads page, click Red Hat JBoss Data Grid . Select the appropriate JBoss Data Grid version from the Version: drop-down menu. Download the appropriate files from the list that is displayed. 2.4.2. About the Red Hat Customer Portal The Red Hat Customer Portal is the centralized platform for Red Hat knowledge and subscription resources. Use the Red Hat Customer Portal to do the following: Manage and maintain Red Hat entitlements and support contracts. Download officially supported software. Access product documentation and the Red Hat Knowledgebase. Contact Global Support Services. File bugs against Red Hat products. The Customer Portal is available here: https://access.redhat.com . 2.4.3. Checksum Validation Checksum validation is used to ensure a downloaded file has not been corrupted. Checksum validation employs algorithms that compute a fixed-size datum (or checksum) from an arbitrary block of digital data. If two parties compute a checksum of a particular file using the same algorithm, the results will be identical. Therefore, when computing the checksum of a downloaded file using the same algorithm as the supplier, if the checksums match, the integrity of the file is confirmed. If there is a discrepancy, the file has been corrupted in the download process. 2.4.4. Verify the Downloaded File Procedure 2.3. Verify the Downloaded File To verify that a file downloaded from the Red Hat Customer Portal is error-free, access the portal site and go to that package's Software Details page. The Software Details page displays the MD5 and SHA256 "checksum" values. Use the checksum values to check the integrity of the file. Open a terminal window and run either the md5sum or sha256sum command with the downloaded file as an argument. The program displays the checksum value for the file as the output for the command. Compare the checksum value returned by the command to the corresponding value displayed on the Software Details page for the file. Note Microsoft Windows does not come equipped with a checksum tool. Windows operating system users have to download a third-party product instead. Result If the two checksum values are identical, then the file has not been altered or corrupted and is, therefore, safe to use. If the two checksum values are not identical, then download the file again. A difference between the checksum values means that the file has either been corrupted during download or has been modified since it was uploaded to the server. If, after several downloads, the checksum still does not validate successfully, contact Red Hat Support for assistance. 2.4.5. Install Red Hat JBoss Data Grid Prerequisite Locate the appropriate version, platform, and file type and download Red Hat JBoss Data Grid from the Customer Portal. 
Procedure 2.4. Install JBoss Data Grid Copy the downloaded JBoss Data Grid package to the preferred location on your machine. Run the following command to extract the downloaded JBoss Data Grid package: Replace JDG_PACKAGE with the name of the JBoss Data Grid usage mode package downloaded from the Red Hat Customer Portal. The resulting unzipped directory will now be referred to as USDJDG_HOME . 2.4.6. Red Hat Documentation Site Red Hat's official documentation site is available at https://access.redhat.com/site/documentation/ . There you will find the latest version of every book, including this one.
[ "unzip JDG_PACKAGE" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-download_and_install_jboss_data_grid
Chapter 41. Consumer Interface
Chapter 41. Consumer Interface Abstract This chapter describes how to implement the Consumer interface, which is an essential step in the implementation of an Apache Camel component. 41.1. The Consumer Interface Overview An instance of org.apache.camel.Consumer type represents a source endpoint in a route. There are several different ways of implementing a consumer (see Section 38.1.3, "Consumer Patterns and Threading" ), and this degree of flexibility is reflected in the inheritance hierarchy (see Figure 41.1, "Consumer Inheritance Hierarchy" ), which includes several different base classes for implementing a consumer. Figure 41.1. Consumer Inheritance Hierarchy Consumer parameter injection For consumers that follow the scheduled poll pattern (see the section called "Scheduled poll pattern" ), Apache Camel provides support for injecting parameters into consumer instances. For example, consider the following endpoint URI for a component identified by the custom prefix: Apache Camel provides support for automatically injecting query options of the form consumer.\* . For the consumer.myConsumerParam parameter, you need to define corresponding setter and getter methods on the Consumer implementation class as follows: Where the getter and setter methods follow the usual Java bean conventions (including capitalizing the first letter of the property name). In addition to defining the bean methods in your Consumer implementation, you must also remember to call the configureConsumer() method in the implementation of Endpoint.createConsumer() (see the section called "Scheduled poll endpoint implementation" ). Example 41.1, "FileEndpoint createConsumer() Implementation" shows an example of a createConsumer() method implementation, taken from the FileEndpoint class in the file component: Example 41.1. FileEndpoint createConsumer() Implementation At run time, consumer parameter injection works as follows: When the endpoint is created, the default implementation of DefaultComponent.createEndpoint(String uri) parses the URI to extract the consumer parameters, and stores them in the endpoint instance by calling ScheduledPollEndpoint.configureProperties() . When createConsumer() is called, the method implementation calls configureConsumer() to inject the consumer parameters (see Example 41.1, "FileEndpoint createConsumer() Implementation" ). The configureConsumer() method uses Java reflection to call the setter methods whose names match the relevant options after the consumer. prefix has been stripped off. Scheduled poll parameters A consumer that follows the scheduled poll pattern automatically supports the consumer parameters shown in Table 41.1, "Scheduled Poll Parameters" (which can appear as query options in the endpoint URI). Table 41.1. Scheduled Poll Parameters Name Default Description initialDelay 1000 Delay, in milliseconds, before the first poll. delay 500 Depends on the value of the useFixedDelay flag (time unit is milliseconds). useFixedDelay false If false , the delay parameter is interpreted as the polling period. Polls will occur at initialDelay , initialDelay+delay , initialDelay+2\*delay , and so on. If true , the delay parameter is interpreted as the time elapsed between the previous execution and the next execution. Polls will occur at initialDelay , initialDelay+[ ProcessingTime ]+delay , and so on. Where ProcessingTime is the time taken to process an exchange object in the current thread. 
Converting between event-driven and polling consumers Apache Camel provides two special consumer implementations which can be used to convert back and forth between an event-driven consumer and a polling consumer. The following conversion classes are provided: org.apache.camel.impl.EventDrivenPollingConsumer - Converts an event-driven consumer into a polling consumer instance. org.apache.camel.impl.DefaultScheduledPollConsumer - Converts a polling consumer into an event-driven consumer instance. In practice, these classes are used to simplify the task of implementing an Endpoint type. The Endpoint interface defines the following two methods for creating a consumer instance: createConsumer() returns an event-driven consumer and createPollingConsumer() returns a polling consumer. You would normally implement only one of these methods. For example, if you are following the event-driven pattern for your consumer, you would implement the createConsumer() method, and a naive implementation of createPollingConsumer() would simply raise an exception. With the help of the conversion classes, however, Apache Camel is able to provide a more useful default implementation. For example, if you want to implement your consumer according to the event-driven pattern, you implement the endpoint by extending DefaultEndpoint and implementing the createConsumer() method. The implementation of createPollingConsumer() is inherited from DefaultEndpoint , where it is defined as follows: The EventDrivenPollingConsumer constructor takes a reference to the event-driven consumer, this , effectively wrapping it and converting it into a polling consumer. To implement the conversion, the EventDrivenPollingConsumer instance buffers incoming events and makes them available on demand through the receive() , the receive(long timeout) , and the receiveNoWait() methods. Analogously, if you are implementing your consumer according to the polling pattern, you implement the endpoint by extending DefaultPollingEndpoint and implementing the createPollingConsumer() method. In this case, the implementation of the createConsumer() method is inherited from DefaultPollingEndpoint , and the default implementation returns a DefaultScheduledPollConsumer instance (which converts the polling consumer into an event-driven consumer). ShutdownPrepared interface Consumer classes can optionally implement the org.apache.camel.spi.ShutdownPrepared interface, which enables your custom consumer endpoint to receive shutdown notifications. Example 41.2, "ShutdownPrepared Interface" shows the definition of the ShutdownPrepared interface. Example 41.2. ShutdownPrepared Interface The ShutdownPrepared interface defines the following methods: prepareShutdown Receives notifications to shut down the consumer endpoint in one or two phases, as follows: Graceful shutdown - where the forced argument has the value false . Attempt to clean up resources gracefully. For example, by stopping threads gracefully. Forced shutdown - where the forced argument has the value true . This means that the shutdown has timed out, so you must clean up resources more aggressively. This is the last chance to clean up resources before the process exits. ShutdownAware interface Consumer classes can optionally implement the org.apache.camel.spi.ShutdownAware interface, which interacts with the graceful shutdown mechanism, enabling a consumer to ask for extra time to shut down. This is typically needed for components such as SEDA, which can have pending exchanges stored in an internal queue. 
Normally, you would want to process all of the exchanges in the queue before shutting down the SEDA consumer. Example 41.3, "ShutdownAware Interface" shows the definition of the ShutdownAware interface. Example 41.3. ShutdownAware Interface The ShutdownAware interface defines the following methods: deferShutdown Return true from this method, if you want to delay shutdown of the consumer. The shutdownRunningTask argument is an enum which can take either of the following values: ShutdownRunningTask.CompleteCurrentTaskOnly - finish processing the exchanges that are currently being processed by the consumer's thread pool, but do not attempt to process any more exchanges than that. ShutdownRunningTask.CompleteAllTasks - process all of the pending exchanges. For example, in the case of the SEDA component, the consumer would process all of the exchanges from its incoming queue. getPendingExchangesSize Indicates how many exchanges remain to be processed by the consumer. A zero value indicates that processing is finished and the consumer can be shut down. For an example of how to define the ShutdownAware methods, see Example 41.7, "Custom Threading Implementation" . 41.2. Implementing the Consumer Interface Alternative ways of implementing a consumer You can implement a consumer in one of the following ways: Event-driven consumer implementation Scheduled poll consumer implementation Polling consumer implementation Custom threading implementation Event-driven consumer implementation In an event-driven consumer, processing is driven explicitly by external events. The events are received through an event-listener interface, where the listener interface is specific to the particular event source. Example 41.4, "JMXConsumer Implementation" shows the implementation of the JMXConsumer class, which is taken from the Apache Camel JMX component implementation. The JMXConsumer class is an example of an event-driven consumer, which is implemented by inheriting from the org.apache.camel.impl.DefaultConsumer class. In the case of the JMXConsumer example, events are represented by calls on the NotificationListener.handleNotification() method, which is a standard way of receiving JMX events. In order to receive these JMX events, it is necessary to implement the NotificationListener interface and override the handleNotification() method, as shown in Example 41.4, "JMXConsumer Implementation" . Example 41.4. JMXConsumer Implementation 1 The JMXConsumer pattern follows the usual pattern for event-driven consumers by extending the DefaultConsumer class. Additionally, because this consumer is designed to receive events from JMX (which are represented by JMX notifications), it is necessary to implement the NotificationListener interface. 2 You must implement at least one constructor that takes a reference to the parent endpoint, endpoint , and a reference to the processor in the chain, processor , as arguments. 3 The handleNotification() method (which is defined in NotificationListener ) is automatically invoked by JMX whenever a JMX notification arrives. The body of this method should contain the code that performs the consumer's event processing. Because the handleNotification() call originates from the JMX layer, the consumer's threading model is implicitly controlled by the JMX layer, not by the JMXConsumer class. 4 This line of code combines two steps. First, the JMX notification object is converted into an exchange object, which is the generic representation of an event in Apache Camel. 
Then the newly created exchange object is passed to the processor in the route (invoked synchronously). 5 The handleException() method is implemented by the DefaultConsumer base class. By default, it handles exceptions using the org.apache.camel.impl.LoggingExceptionHandler class. Note The handleNotification() method is specific to the JMX example. When implementing your own event-driven consumer, you must identify an analogous event listener method to implement in your custom consumer. Scheduled poll consumer implementation In a scheduled poll consumer, polling events are automatically generated by a timer class, java.util.concurrent.ScheduledExecutorService . To receive the generated polling events, you must implement the ScheduledPollConsumer.poll() method (see Section 38.1.3, "Consumer Patterns and Threading" ). Example 41.5, "ScheduledPollConsumer Implementation" shows how to implement a consumer that follows the scheduled poll pattern, which is implemented by extending the ScheduledPollConsumer class. Example 41.5. ScheduledPollConsumer Implementation 1 Implement a scheduled poll consumer class, CustomConsumer , by extending the org.apache.camel.impl.ScheduledPollConsumer class. 2 You must implement at least one constructor that takes a reference to the parent endpoint, endpoint , and a reference to the processor in the chain, processor , as arguments. 3 Override the poll() method to receive the scheduled polling events. This is where you should put the code that retrieves and processes incoming events (represented by exchange objects). 4 In this example, the event is processed synchronously. If you want to process events asynchronously, you should use a reference to an asynchronous processor instead, by calling getAsyncProcessor() . For details of how to process events asynchronously, see Section 38.1.4, "Asynchronous Processing" . 5 (Optional) If you want some lines of code to execute as the consumer is starting up, override the doStart() method as shown. 6 (Optional) If you want some lines of code to execute as the consumer is stopping, override the doStop() method as shown. Polling consumer implementation Example 41.6, "PollingConsumerSupport Implementation" outlines how to implement a consumer that follows the polling pattern, which is implemented by extending the PollingConsumerSupport class. Example 41.6. PollingConsumerSupport Implementation 1 Implement your polling consumer class, CustomConsumer , by extending the org.apache.camel.impl.PollingConsumerSupport class. 2 You must implement at least one constructor that takes a reference to the parent endpoint, endpoint , as an argument. A polling consumer does not need a reference to a processor instance. 3 The receiveNoWait() method should implement a non-blocking algorithm for retrieving an event (exchange object). If no event is available, it should return null . 4 The receive() method should implement a blocking algorithm for retrieving an event. This method can block indefinitely, if events remain unavailable. 5 The receive(long timeout) method implements an algorithm that can block for as long as the specified timeout (typically specified in units of milliseconds). 6 If you want to insert code that executes while a consumer is starting up or shutting down, implement the doStart() method and the doStop() method, respectively. 
Custom threading implementation If the standard consumer patterns are not suitable for your consumer implementation, you can implement the Consumer interface directly and write the threading code yourself. When writing the threading code, however, it is important that you comply with the standard Apache Camel threading model, as described in Section 2.8, "Threading Model" . For example, the SEDA component from camel-core implements its own consumer threading, which is consistent with the Apache Camel threading model. Example 41.7, "Custom Threading Implementation" shows an outline of how the SedaConsumer class implements its threading. Example 41.7. Custom Threading Implementation 1 The SedaConsumer class is implemented by extending the org.apache.camel.impl.ServiceSupport class and implementing the Consumer , Runnable , and ShutdownAware interfaces. 2 Implement the Runnable.run() method to define what the consumer does while it is running in a thread. In this case, the consumer runs in a loop, polling the queue for new exchanges and then processing the exchanges in the latter part of the queue. 3 The doStart() method is inherited from ServiceSupport . You override this method in order to define what the consumer does when it starts up. 4 Instead of creating threads directly, you should create a thread pool using the ExecutorServiceStrategy object that is registered with the CamelContext . This is important, because it enables Apache Camel to implement centralized management of threads and support such features as graceful shutdown. For details, see Section 2.8, "Threading Model" . 5 Kick off the threads by calling the ExecutorService.execute() method poolSize times. 6 The doStop() method is inherited from ServiceSupport . You override this method in order to define what the consumer does when it shuts down. 7 Shut down the thread pool, which is represented by the executor instance.
[ "custom:destination?consumer.myConsumerParam", "public class CustomConsumer extends ScheduledPollConsumer { String getMyConsumerParam() { ... } void setMyConsumerParam(String s) { ... } }", "public class FileEndpoint extends ScheduledPollEndpoint { public Consumer createConsumer(Processor processor) throws Exception { Consumer result = new FileConsumer(this, processor); configureConsumer(result); return result; } }", "package org.apache.camel; public interface Endpoint { Consumer createConsumer(Processor processor) throws Exception; PollingConsumer createPollingConsumer() throws Exception; }", "public PollingConsumer<E> createPollingConsumer() throws Exception { return new EventDrivenPollingConsumer<E>(this); }", "package org.apache.camel.spi; public interface ShutdownPrepared { void prepareShutdown(boolean forced); }", "// Java package org.apache.camel.spi; import org.apache.camel.ShutdownRunningTask; public interface ShutdownAware extends ShutdownPrepared { boolean deferShutdown(ShutdownRunningTask shutdownRunningTask); int getPendingExchangesSize(); }", "package org.apache.camel.component.jmx; import javax.management.Notification; import javax.management.NotificationListener; import org.apache.camel.Processor; import org.apache.camel.impl.DefaultConsumer; public class JMXConsumer extends DefaultConsumer implements NotificationListener { 1 JMXEndpoint jmxEndpoint; public JMXConsumer(JMXEndpoint endpoint, Processor processor) { 2 super(endpoint, processor); this.jmxEndpoint = endpoint; } public void handleNotification(Notification notification, Object handback) { 3 try { getProcessor().process(jmxEndpoint.createExchange(notification)); 4 } catch (Throwable e) { handleException(e); 5 } } }", "import java.util.concurrent.ScheduledExecutorService; import org.apache.camel.Consumer; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Message; import org.apache.camel.PollingConsumer; import org.apache.camel.Processor; import org.apache.camel.impl.ScheduledPollConsumer; public class pass:quotes[ CustomConsumer ] extends ScheduledPollConsumer { 1 private final pass:quotes[ CustomEndpoint ] endpoint; public pass:quotes[ CustomConsumer ](pass:quotes[ CustomEndpoint ] endpoint, Processor processor) { 2 super(endpoint, processor); this.endpoint = endpoint; } protected void poll() throws Exception { 3 Exchange exchange = /* Receive exchange object ... */; // Example of a synchronous processor. getProcessor().process(exchange); 4 } @Override protected void doStart() throws Exception { 5 // Pre-Start: // Place code here to execute just before start of processing. super.doStart(); // Post-Start: // Place code here to execute just after start of processing. } @Override protected void doStop() throws Exception { 6 // Pre-Stop: // Place code here to execute just before processing stops. super.doStop(); // Post-Stop: // Place code here to execute just after processing stops. } }", "import org.apache.camel.Exchange; import org.apache.camel.RuntimeCamelException; import org.apache.camel.impl.PollingConsumerSupport; public class pass:quotes[ CustomConsumer ] extends PollingConsumerSupport { 1 private final pass:quotes[ CustomEndpoint ] endpoint; public pass:quotes[ CustomConsumer ](pass:quotes[ CustomEndpoint ] endpoint) { 2 super(endpoint); this.endpoint = endpoint; } public Exchange receiveNoWait() { 3 Exchange exchange = /* Obtain an exchange object. 
*/; // Further processing return exchange; } public Exchange receive() { 4 // Blocking poll } public Exchange receive(long timeout) { 5 // Poll with timeout } protected void doStart() throws Exception { 6 // Code to execute whilst starting up. } protected void doStop() throws Exception { // Code to execute whilst shutting down. } }", "package org.apache.camel.component.seda; import java.util.ArrayList; import java.util.List; import java.util.concurrent.BlockingQueue; import java.util.concurrent.ExecutorService; import java.util.concurrent.TimeUnit; import org.apache.camel.Consumer; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.ShutdownRunningTask; import org.apache.camel.impl.LoggingExceptionHandler; import org.apache.camel.impl.ServiceSupport; import org.apache.camel.util.ServiceHelper; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; /** * A Consumer for the SEDA component. * * @version USDRevision: 922485 USD */ public class SedaConsumer extends ServiceSupport implements Consumer, Runnable, ShutdownAware { 1 private static final transient Log LOG = LogFactory.getLog(SedaConsumer.class); private SedaEndpoint endpoint; private Processor processor; private ExecutorService executor; public SedaConsumer(SedaEndpoint endpoint, Processor processor) { this.endpoint = endpoint; this.processor = processor; } public void run() { 2 BlockingQueue<Exchange> queue = endpoint.getQueue(); // Poll the queue and process exchanges } protected void doStart() throws Exception { 3 int poolSize = endpoint.getConcurrentConsumers(); executor = endpoint.getCamelContext().getExecutorServiceStrategy() .newFixedThreadPool(this, endpoint.getEndpointUri(), poolSize); 4 for (int i = 0; i < poolSize; i++) { 5 executor.execute(this); } endpoint.onStarted(this); } protected void doStop() throws Exception { 6 endpoint.onStopped(this); // must shutdown executor on stop to avoid overhead of having them running endpoint.getCamelContext().getExecutorServiceStrategy().shutdownNow(executor); 7 if (multicast != null) { ServiceHelper.stopServices(multicast); } } //---------- // Implementation of ShutdownAware interface public boolean deferShutdown(ShutdownRunningTask shutdownRunningTask) { // deny stopping on shutdown as we want seda consumers to run in case some other queues // depend on this consumer to run, so it can complete its exchanges return true; } public int getPendingExchangesSize() { // number of pending messages on the queue return endpoint.getQueue().size(); } }" ]
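To illustrate how the EventDrivenPollingConsumer conversion described above is typically used by application code, the following is a minimal sketch that obtains a polling consumer from an endpoint and receives an exchange with a timeout; the endpoint URI is an arbitrary example and error handling is omitted: import org.apache.camel.CamelContext; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.PollingConsumer; import org.apache.camel.impl.DefaultCamelContext; public class PollingConsumerExample { public static void main(String[] args) throws Exception { CamelContext context = new DefaultCamelContext(); context.start(); // Even an endpoint whose consumer follows the event-driven pattern can be polled, // because createPollingConsumer() wraps it in an EventDrivenPollingConsumer. Endpoint endpoint = context.getEndpoint("seda:example"); PollingConsumer consumer = endpoint.createPollingConsumer(); consumer.start(); // Block for up to one second waiting for an exchange; returns null on timeout. Exchange exchange = consumer.receive(1000); if (exchange != null) { System.out.println("Received body: " + exchange.getIn().getBody(String.class)); } consumer.stop(); context.stop(); } }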
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/ConsumerIntf
13.2.18. Domain Options: Using DNS Service Discovery
13.2.18. Domain Options: Using DNS Service Discovery DNS service discovery, defined in RFC 2782 , allows applications to check the SRV records in a given domain for certain services of a certain type; it then returns any servers discovered of that type. With SSSD, the identity and authentication providers can either be explicitly defined (by IP address or host name) or they can be discovered dynamically, using service discovery. If no provider server is listed - for example, if id_provider = ldap is set without a corresponding ldap_uri parameter - then discovery is automatically used. The DNS discovery query has this format: For example, a scan for an LDAP server using TCP in the example.com domain looks like this: Note For every service with which to use service discovery, add a special DNS record to the DNS server: For SSSD, the service type is LDAP by default, and almost all services use TCP (except for Kerberos, which starts with UDP). For service discovery to be enabled, the only thing that is required is the domain name. The default is to use the domain portion of the machine host name, but another domain can be specified (using the dns_discovery_domain parameter). So, by default, no additional configuration needs to be made for service discovery - with one exception. The password change provider has server discovery disabled by default, and it must be explicitly enabled by setting a service type. While no configuration is necessary, it is possible for server discovery to be customized by using a different DNS domain ( dns_discovery_domain ) or by setting a different service type to scan for. For example: Lastly, service discovery is never used with backup servers; it is only used for the primary server for a provider. What this means is that discovery can be used initially to locate a server, and then SSSD can fall back to using a backup server. To use discovery for the primary server, use _srv_ as the primary server value, and then list the backup servers. For example: Note Service discovery cannot be used with backup servers, only primary servers. If a DNS lookup fails to return an IPv4 address for a host name, SSSD attempts to look up an IPv6 address before returning a failure. This only ensures that the asynchronous resolver identifies the correct address. The host name resolution behavior is configured in the lookup family order option in the sssd.conf configuration file.
[ "_ service ._ protocol.domain", "_ldap._tcp.example.com", "_service._protocol._domain TTL priority weight port hostname", "[domain/EXAMPLE] chpass_provider = ldap ldap_chpass_dns_service_name = ldap", "[domain/EXAMPLE] id _provider = ldap dns_discovery_domain = corp.example.com ldap_dns_service_name = ldap chpass_provider = krb5 ldap_chpass_dns_service_name = kerberos", "[domain/EXAMPLE] id _provider = ldap ldap_uri = _srv_ ldap_backup_uri = ldap://ldap2.example.com auth_provider = krb5 krb5_server = _srv_ krb5_backup_server = kdc2.example.com chpass_provider = krb5 ldap_chpass_dns_service_name = kerberos ldap_chpass_uri = _srv_ ldap_chpass_backup_uri = kdc2.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sssd-service-discovery
22.3. GNOME Boxes
22.3. GNOME Boxes Boxes is a lightweight graphical desktop virtualization tool used to view and access virtual machines and remote systems. Unlike virt-viewer and remote-viewer , Boxes not only allows viewing guest virtual machines, but also creating and configuring them, similar to virt-manager . However, in comparison with virt-manager , Boxes offers fewer management options and features, but is easier to use. To install Boxes , run: Open Boxes through Applications ⇒ System Tools . The main screen shows the available guest virtual machines. The right side of the screen has two buttons: the search button, to search for guest virtual machines by name, and the selection button. Clicking the selection button allows you to select one or more guest virtual machines in order to perform operations individually or as a group. The available operations are shown at the bottom of the screen on the operations bar: Figure 22.3. The Operations Bar There are four operations that can be performed: Favorite : Adds a heart to selected guest virtual machines and moves them to the top of the list of guests. This becomes increasingly helpful as the number of guests grows. Pause : The selected guest virtual machines will stop running. Delete : Removes selected guest virtual machines. Properties : Shows the properties of the selected guest virtual machine. Create new guest virtual machines using the New button on the left side of the main screen. Procedure 22.1. Creating a new guest virtual machine with Boxes Click New . This opens the Introduction screen. Click Continue . Figure 22.4. Introduction screen Select source The Source Selection screen has three options: Available media: Any immediately available installation media will be shown here. Clicking any of these will take you directly to the Review screen. Enter a URL : Type in a URL to specify a local URI or path to an ISO file. This can also be used to access a remote machine. The address should follow the pattern of protocol :// IPaddress ? port ; , for example: The protocols can be spice:// , qemu:// , or vnc:// . Select a file : Open a file directory to search for installation media manually. Figure 22.5. Source Selection screen Review the details The Review screen shows the details of the guest virtual machine. Figure 22.6. Review screen These details can be left as is, in which case proceed to the final step, or: Optional: customize the details Clicking Customize allows you to adjust the configuration of the guest virtual machine, such as the memory and disk size. Figure 22.7. Customization screen Create Click Create . The new guest virtual machine will open.
[ "yum install gnome-boxes", "spice://192.168.122.1?port=5906;" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-graphic_user_interface_tools_for_guest_virtual_machine_management-gnome_boxes
Chapter 23. Identifying virtual devices with tags
Chapter 23. Identifying virtual devices with tags 23.1. Tagging virtual devices In Red Hat OpenStack Platform, if you launch a VM instance with multiple network interfaces or block devices, you can use device tagging to communicate the intended role of each device to the instance operating system. Tags are assigned to devices at instance boot time, and are available to the instance operating system through the metadata API and the configuration drive (if enabled). Procedure To tag virtual devices, use the tag parameters, --block-device and --nic , when creating instances. Example The resulting tags are added to the existing instance metadata and are available through both the metadata API, and on the configuration drive. In this example, the following devices section populates the metadata: Sample contents of the meta_data.json file: The device tag metadata is available using GET /openstack/latest/meta_data.json from the metadata API. If the configuration drive is enabled, and mounted under /configdrive in the instance operating system, the metadata is also present in /configdrive/openstack/latest/meta_data.json .
[ "nova boot test-vm --flavor m1.tiny --image cirros --nic net-id=55411ca3-83dd-4036-9158-bf4a6b8fb5ce,tag=nfv1 --block-device id=b8c9bef7-aa1d-4bf4-a14d-17674b370e13,bus=virtio,tag=database-server NFVappServer", "{ \"devices\": [ { \"type\": \"nic\", \"bus\": \"pci\", \"address\": \"0030:00:02.0\", \"mac\": \"aa:00:00:00:01\", \"tags\": [\"nfv1\"] }, { \"type\": \"disk\", \"bus\": \"pci\", \"address\": \"0030:00:07.0\", \"serial\": \"disk-vol-227\", \"tags\": [\"database-server\"] } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/identify-vdevices-tags_rhosp-network
Chapter 37. Red Hat Enterprise Linux Atomic Host 7.2.7
Chapter 37. Red Hat Enterprise Linux Atomic Host 7.2.7 This release doesn't include any updated images and the latest version of Atomic Host cloud images remains at 7.2.6-1. The latest "Red Hat Atomic Host Installer" ISO image remains at 7.2.3-1 as well. OSTree has been updated and new deployments can be created with any of those images and updated to the latest release by running the atomic host upgrade command. 37.1. Atomic Host OStree update : New Tree Version: 7.2.7 (hash: dae35767902aad07b087d359be20f234d244da79fdd4734cd2fbc3ee39b12cf8) Changes since Tree Version 7.2.6 (hash: 347c3f5eb641e69fc602878c646cf42c4bcd5d9f36847a1f24ff8f3ec80f17b1) Updated packages : selinux-policy-3.13.1-63.atomic.el7.7 37.2. Extras Updated packages : docker-1.10.3-46.el7.14 docker-latest-1.12.1-2.el7 etcd-2.3.7-4.el7 oci-register-machine-0-1.8.gitaf6c129.el7 37.2.1. Container Images Updated : Red Hat Enterprise Linux Container Image (rhel7/rhel) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic Kubernetes-controller Container Image (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes-apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes-scheduler Container Image (rhel7/kubernetes-scheduler) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) (Technology Preview) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) (Technology Preview) 37.3. New Features docker-latest has been upgraded to version 1.12.1 The docker-latest packages are now version 1.12.1. The following article has been updated to reflect the changes Introducing docker-latest for RHEL 7 and RHEL Atomic Host . docker 1.12 uses runc as a runtime environment Since docker version 1.11, runc is used instead of libcontainer for container runtime. The docker-latest packages contain 1.12, and runc can be found in /usr/libexec/docker/docker-runc . However, docker-runc is for internal use only by docker. If you want to use the runc command, you still need the runc package installed on your system. For RHEL Atomic Host, it is part of the OSTree by default, and for Red Hat Enterprise Linux 7, it is available as a separate package. Important Red Hat does not support modifying which runc binary is used by docker. docker swarm is now available As of 1.12 release, the upstream Docker project has embedded Docker Swarm in the docker binary. To avoid any unintended bugs, Red Hat has chosen to include Swarm as an unsupported add-on. For container orchestration, Red Hat recommends OpenShift and Kubernetes.
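As a supplement to the upgrade note above, the following is a minimal sketch of checking the current deployment and moving an existing Atomic Host to this release; output is omitted: # Show the currently booted and rollback OSTree deployments. atomic host status # Download and deploy the latest tree, then reboot into it. atomic host upgrade systemctl reboot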
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_2_7
Chapter 20. Internationalization
Chapter 20. Internationalization 20.1. Red Hat Enterprise Linux 7 International Languages Red Hat Enterprise Linux 7 supports the installation of multiple languages and the changing of languages based on your requirements. The following languages are supported in Red Hat Enterprise Linux 7: East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese; European Languages - English, German, Spanish, French, Italian, Portuguese Brazilian, and Russian. Indic Languages - Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, and Telugu. The table below summarizes the currently supported languages, their locales, default fonts installed, and packages required for some of the supported languages. For more information on font configuration, see Desktop Migration and Administration Guide . Table 20.1. Language Support Matrix Territory Language Locale Default Font (Font Package) Input Methods Brazil Portuguese pt_BR.UTF-8 DejaVu Sans (dejavu-sans-fonts) France French fr_FR.UTF-8 DejaVu Sans (dejavu-sans-fonts) Germany German de_DE.UTF-8 DejaVu Sans (dejavu-sans-fonts) Italy Italian it_IT.UTF-8 DejaVu Sans (dejavu-sans-fonts) Russia Russian ru_RU.UTF-8 DejaVu Sans (dejavu-sans-fonts) Spain Spanish es_ES.UTF-8 DejaVu Sans (dejavu-sans-fonts) USA English en_US.UTF-8 DejaVu Sans (dejavu-sans-fonts) China Simplified Chinese zh_CN.UTF-8 WenQuanYi Zen Hei Sharp (wqy-zenhei-fonts) ibus-libpinyin, ibus-table-chinese Japan Japanese ja_JP.UTF-8 VL PGothic (vlgothic-p-fonts) ibus-kkc Korea Korean ko_KR.UTF-8 NanumGothic (nhn-nanum-gothic-fonts) ibus-hangul Taiwan Traditional Chinese zh_TW.UTF-8 AR PL UMing TW (cjkuni-uming-fonts) ibus-chewing, ibus-table-chinese India Assamese as_IN.UTF-8 Lohit Assamese (lohit-assamese-fonts) ibus-m17n, m17n-db, m17n-contrib Bengali bn_IN.UTF-8 Lohit Bengali (lohit-bengali-fonts) ibus-m17n, m17n-db, m17n-contrib Gujarati gu_IN.UTF-8 Lohit Gujarati (lohit-gujarati-fonts) ibus-m17n, m17n-db, m17n-contrib Hindi hi_IN.UTF-8 Lohit Hindi (lohit-devanagari-fonts) ibus-m17n, m17n-db, m17n-contrib Kannada kn_IN.UTF-8 Lohit Kannada (lohit-kannada-fonts) ibus-m17n, m17n-db, m17n-contrib Malayalam ml_IN.UTF-8 Meera (smc-meera-fonts) ibus-m17n, m17n-db, m17n-contrib Marathi mr_IN.UTF-8 Lohit Marathi (lohit-marathi-fonts) ibus-m17n, m17n-db, m17n-contrib Odia or_IN.UTF-8 Lohit Oriya (lohit-oriya-fonts) ibus-m17n, m17n-db, m17n-contrib Punjabi pa_IN.UTF-8 Lohit Punjabi (lohit-punjabi-fonts) ibus-m17n, m17n-db, m17n-contrib Tamil ta_IN.UTF-8 Lohit Tamil (lohit-tamil-fonts) ibus-m17n, m17n-db, m17n-contrib Telugu te_IN.UTF-8 Lohit Telugu (lohit-telugu-fonts) ibus-m17n, m17n-db, m17n-contrib
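As an illustrative sketch only, enabling one of the languages from the table, Japanese in this example, might combine installing the listed font and input-method packages with a locale change through localectl; the exact package set you need may differ.
# Install the default Japanese font and input method from Table 20.1
yum install vlgothic-p-fonts ibus-kkc
# Switch the system locale and confirm the change
localectl set-locale LANG=ja_JP.UTF-8
localectl status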
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-internationalization
Chapter 7. Configuring Soft-iWARP
Chapter 7. Configuring Soft-iWARP Remote Direct Memory Access (RDMA) uses several libraries and protocols over Ethernet, such as iWARP and Soft-iWARP, to improve performance and provide a simpler programming interface. Important Soft-iWARP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 7.1. Overview of iWARP and Soft-iWARP Remote direct memory access (RDMA) uses iWARP over Ethernet for converged and low-latency data transmission over TCP. By using standard Ethernet switches and the TCP/IP stack, iWARP routes traffic across the IP subnets to utilize the existing infrastructure efficiently. In Red Hat Enterprise Linux, multiple providers implement iWARP for their hardware network interface cards, for example, cxgb4 , irdma , and qedr . Soft-iWARP (siw) is a software-based iWARP kernel driver and user library for Linux. It is a software-based RDMA device that provides a programming interface to RDMA hardware when attached to network interface cards. It provides an easy way to test and validate the RDMA environment. 7.2. Configuring Soft-iWARP Soft-iWARP (siw) implements the iWARP Remote direct memory access (RDMA) transport over the Linux TCP/IP network stack. It enables a system with a standard Ethernet adapter to interoperate with an iWARP adapter, with another system running the Soft-iWARP driver, or with a host whose hardware supports iWARP. Important The Soft-iWARP feature is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production Service Level Agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These previews provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. To configure Soft-iWARP, you can use this procedure in a script to run automatically when the system boots. Prerequisites An Ethernet adapter is installed. Procedure Install the iproute , libibverbs , libibverbs-utils , and infiniband-diags packages: Display the RDMA links: Load the siw kernel module: Add a new siw device named siw0 that uses the enp0s1 interface: Verification View the state of all RDMA links: List the available RDMA devices: You can use the ibv_devinfo utility to display a detailed status:
[ "yum install iproute libibverbs libibverbs-utils infiniband-diags", "rdma link show", "modprobe siw", "rdma link add siw0 type siw netdev enp0s1", "rdma link show link siw0/1 state ACTIVE physical_state LINK_UP netdev enp0s1", "ibv_devices device node GUID ------ ---------------- siw0 0250b6fffea19d61", "ibv_devinfo siw0 hca_id: siw0 transport: iWARP (1) fw_ver: 0.0.0 node_guid: 0250:b6ff:fea1:9d61 sys_image_guid: 0250:b6ff:fea1:9d61 vendor_id: 0x626d74 vendor_part_id: 1 hw_ver: 0x0 phys_port_cnt: 1 port: 1 state: PORT_ACTIVE (4) max_mtu: 1024 (3) active_mtu: 1024 (3) sm_lid: 0 port_lid: 0 port_lmc: 0x00 link_layer: Ethernet" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_infiniband_and_rdma_networks/configuring-soft-iwarp_configuring-infiniband-and-rdma-networks
Chapter 9. Enabling FIPS mode with RHEL image builder
Chapter 9. Enabling FIPS mode with RHEL image builder You can create a customized image and boot a FIPS-enabled RHEL image. Before you compose the image, you must change the value of the fips directive in your blueprint. Prerequisites You are logged in as the root user or a user who is a member of the weldr group. Procedure Create a plain text file in the Tom's Obvious, Minimal Language (TOML) format with the following content: Import the blueprint to the RHEL image builder server: List the existing blueprints to check whether the created blueprint is successfully imported and exists: Check whether the components and versions listed in the blueprint and their dependencies are valid: Build the customized RHEL image: Review the image status: Download the image: RHEL image builder downloads the image to the current directory path. The UUID number and the image size are displayed alongside: Verification Log in to the system image with the username and password that you configured in your blueprint. Check if FIPS mode is enabled:
[ "name = \"system-fips-mode-enabled\" description = \"blueprint with FIPS enabled \" version = \"0.0.1\" [customizations] fips = true [[customizations.user]] name = \"admin\" password = \"admin\" groups = [\"users\", \"wheel\"]", "composer-cli blueprints push <blueprint-name> .toml", "composer-cli blueprints show <blueprint-name>", "composer-cli blueprints depsolve <blueprint-name>", "composer-cli compose start \\ <blueprint-name> \\ <image-type> \\", "composer-cli compose status ... <UUID> FINISHED <date> <blueprint-name> <blueprint-version> <image-type> ...", "composer-cli compose image <UUID>", "<UUID-image-name.type> : <size> MB", "fips-mode-setup --check FIPS mode is enabled." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/enabling-fips-mode-with-rhel-image-builder_composing-a-customized-rhel-system-image
Preface
Preface Curate collections developed in your organization using namespaces in automation hub.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/curating_collections_using_namespaces_in_automation_hub/pr01
Understanding OpenShift GitOps
Understanding OpenShift GitOps Red Hat OpenShift GitOps 1.15 Introduction to OpenShift GitOps Red Hat OpenShift Documentation Team
[ "oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0", "oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:<image_version_tag> 1", "oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210399 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html-single/understanding_openshift_gitops/index
Chapter 2. Installing Datadog for Ceph integration
Chapter 2. Installing Datadog for Ceph integration After installing the Datadog agent, configure the Datadog agent to report Ceph metrics to Datadog. Prerequisites Root-level access to the Ceph monitor node. Appropriate Ceph key providing access to the Red Hat Ceph Storage cluster. Internet access. Procedure Install the Ceph integration. Log in to the Datadog App . The user interface presents navigation on the left side of the screen. Click Integrations . Either enter ceph into the search field or scroll to find the Ceph integration. The user interface indicates whether the Ceph integration is available or already installed . If it is available , click the button to install it. Configuring the Datadog agent for Ceph Navigate to the Datadog Agent configuration directory: Create a ceph.yaml file from the ceph.yaml.example file: Modify the ceph.yaml file: Example The following is a sample of what the modified ceph.yaml file looks like. Uncomment the - tags , - name , ceph_cmd , ceph_cluster , and use_sudo: True lines. The default values for ceph_cmd and ceph_cluster are /usr/bin/ceph and ceph respectively. When complete, it will look like this: Modify the sudoers file: Add the following line: Enable the Datadog agent so that it will restart if the Ceph host reboots: Restart the Datadog agent:
[ "cd /etc/dd-agent/conf.d", "cp ceph.yaml.example ceph.yaml", "vim ceph.yaml", "init_config: instances: - tags: - name:mars_cluster # ceph_cmd: /usr/bin/ceph ceph_cluster: ceph # If your environment requires sudo, please add a line like: dd-agent ALL=(ALL) NOPASSWD:/usr/bin/ceph to your sudoers file, and uncomment the below option. # use_sudo: True", "init_config: instances: - tags: - name:ceph-RHEL # ceph_cmd: /usr/bin/ceph ceph_cluster: ceph # If your environment requires sudo, please add a line like: dd-agent ALL=(ALL) NOPASSWD:/usr/bin/ceph to your sudoers file, and uncomment the below option. # use_sudo: True", "visudo", "dd-agent ALL=(ALL) NOPASSWD:/usr/bin/ceph", "systemctl enable datadog-agent", "systemctl status datadog-agent" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/monitoring_ceph_with_datadog_guide/installing-datadog-for-ceph-integration_datadog
Chapter 6. Important links
Chapter 6. Important links Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2021-05-07 10:16:41 UTC
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/amq_clients_overview/important_links
System Administrator's Guide
System Administrator's Guide Red Hat Enterprise Linux 7 Deployment, configuration, and administration of RHEL 7 Abstract The System Administrator's Guide documents relevant information regarding the deployment, configuration, and administration of Red Hat Enterprise Linux 7. It is oriented towards system administrators with a basic understanding of the system. Note To expand your expertise, you might also be interested in the Red Hat System Administration I (RH124) , Red Hat System Administration II (RH134) , Red Hat System Administration III (RH254) , or RHCSA Rapid Track (RH199) training courses. If you want to use Red Hat Enterprise Linux 7 with the Linux Containers functionality, see Product Documentation for Red Hat Enterprise Linux Atomic Host . For an overview of general Linux Containers concept and their current capabilities implemented in Red Hat Enterprise Linux 7, see Overview of Containers in Red Hat Systems . The topics related to containers management and administration are described in the Red Hat Enterprise Linux Atomic Host 7 Managing Containers guide.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/index
7.127. mgetty
7.127. mgetty 7.127.1. RHBA-2015:0711 - mgetty bug fix update Updated mgetty packages that fix one bug are now available for Red Hat Enterprise Linux 6. The mgetty packages contain a modem getty utility that allows logins over a serial line, for example using a modem. If you are using a Class 2 or Class 2.0 modem, mgetty can receive faxes. The mgetty-sendfax package is required to send faxes. Bug Fix BZ# 729003 Missing files with debug information have been added to the mgetty-debuginfo packages for seven binary files shipped in the mgetty package. Users of mgetty are advised to upgrade to these updated packages, which fix this bug.
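A minimal sketch of applying the update on a subscribed Red Hat Enterprise Linux 6 system, following the advice above; the mgetty-sendfax package is only needed if it is already installed.
# Update the mgetty packages to the fixed build
yum update mgetty mgetty-sendfax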
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-mgetty
Chapter 9. Advanced migration options
Chapter 9. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 9.1. Terminology Table 9.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 9.2. Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 9.2.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure internal registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. 
The PVs must be located in the same geographic region. The PVs must have the same storage class. 9.2.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed internal registry on all remote clusters. Prerequisites The internal registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. Procedure To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 9.2.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.9, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 9.2.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 9.2.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 9.2.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 9.2.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. 
If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 9.2.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 9.2.3.2.1. NetworkPolicy configuration 9.2.3.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 9.2.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 9.2.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 9.2.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. 
For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 9.2.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 9.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 9.2.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 9.2.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. 
You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe cluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 
5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 9.2.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. 
You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically.
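A rough sketch of the quiesce step described above, scaling workload replicas to 0 directly on the source cluster; the deployment name and namespace are placeholders.
# Scale the application's workloads on the source cluster down to zero replicas
oc scale deployment <deployment_name> -n <source_namespace> --replicas=0
# Confirm that no application pods are still running
oc get pods -n <source_namespace>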
Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 9.3. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.7 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 9.3.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 9.3.1.1. Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources.
Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 9.3.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 9.4. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 9.4.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. 
Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 9.4.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 9.4.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 9.4.4. Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 9.4.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. 
Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. 9.4.6. Converting storage classes in the MTC web console You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on the cluster on which MTC is running. You must add the cluster to the MTC web console. Procedure In the left-side navigation pane of the OpenShift Container Platform web console, click Projects . In the list of projects, click your project. The Project details page opens. Click the DeploymentConfig name. Note the name of its running pod. Open the YAML tab of the project. Find the PVs and note the names of their corresponding persistent volume claims (PVCs). In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must contain 3 to 63 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). From the Migration type menu, select Storage class conversion . From the Source cluster list, select the desired cluster for storage class conversion. Click . 
The Namespaces page opens. Select the required project. Click . The Persistent volumes page opens. The page displays the PVs in the project, all selected by default. For each PV, select the desired target storage class. Click . The wizard validates the new migration plan and shows that it is ready. Click Close . The new plan appears on the Migration plans page. To start the conversion, click the options menu of the new plan. Under Migrations , two options are displayed, Stage and Cutover . Note Cutover migration updates PVC references in the applications. Stage migration does not update PVC references in the applications. Select the desired option. Depending on which option you selected, the Stage migration or Cutover migration notification appears. Click Migrate . Depending on which option you selected, the Stage started or Cutover started message appears. To see the status of the current migration, click the number in the Migrations column. The Migrations page opens. To see more details on the current migration and monitor its progress, select the migration from the Type column. The Migration details page opens. When the migration progresses to the DirectVolume step and the status of the step becomes Running Rsync Pods to migrate Persistent Volume data , you can click View details and see the detailed status of the copies. In the breadcrumb bar, click Stage or Cutover and wait for all steps to complete. Open the PersistentVolumeClaims tab of the OpenShift Container Platform web console. You can see new PVCs with the names of the initial PVCs but ending in new , which are using the target storage class. In the left-side navigation pane, click Pods . See that the pod of your project is running again. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 9.4.7. Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. 
Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 9.5. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 9.5.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 9.5.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . 
This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 9.5.3. Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]'
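To confirm that the patched values are actually in effect, you can read them back from the MigrationController CR and the controller deployment. A minimal sketch, assuming the controller deployment is named migration-controller as shown earlier in this chapter:
# Check whether cached clients are enabled
$ oc -n openshift-migration get migrationcontroller migration-controller \
    -o jsonpath='{.spec.mig_controller_enable_cache}'
# Inspect the resource limits and requests applied to the controller container
$ oc -n openshift-migration get deployment migration-controller \
    -o jsonpath='{.spec.template.spec.containers[0].resources}'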
[ "oc create route passthrough --service=image-registry -n openshift-image-registry", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF", "oc sa get-token migration-controller -n openshift-migration | base64 -w 0", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF", "oc describe cluster <cluster>", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF", "echo -n \"<key>\" | base64 -w 0 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF", "oc describe migstorage <migstorage>", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 
namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF", "oc describe migplan <migplan> -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF", "oc watch migmigration <migmigration> -n openshift-migration", "Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. 
Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47", "- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces", "- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"", "- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail", "- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"", "oc edit migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2", "oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1", "name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims", "spec: namespaces: - namespace_2 - namespace_1:namespace_2", "spec: namespaces: - namespace_1:namespace_1", "spec: namespaces: - namespace_1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false", "oc edit migrationcontroller -n openshift-migration", "mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 
mig_namespace_limit: 10 7", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/migration_toolkit_for_containers/advanced-migration-options-mtc
Chapter 10. Server Cache Configuration
Chapter 10. Server Cache Configuration Red Hat Single Sign-On has two types of caches. One type of cache sits in front of the database to decrease load on the DB and to decrease overall response times by keeping data in memory. Realm, client, role, and user metadata is kept in this type of cache. This cache is a local cache. Local caches do not use replication even if you are in a cluster with more Red Hat Single Sign-On servers. Instead, they only keep copies locally, and if an entry is updated, an invalidation message is sent to the rest of the cluster and the entry is evicted. There is a separate replicated cache, work , whose task is to send the invalidation messages to the whole cluster about which entries should be evicted from local caches. This greatly reduces network traffic, makes things efficient, and avoids transmitting sensitive metadata over the wire. The second type of cache handles managing user sessions, offline tokens, and keeping track of login failures so that the server can detect password phishing and other attacks. The data held in these caches is temporary, in memory only, but is possibly replicated across the cluster. This chapter discusses some configuration options for these caches for both clustered and non-clustered deployments. Note More advanced configuration of these caches can be found in the Infinispan section of the JBoss EAP Configuration Guide . 10.1. Eviction and Expiration There are multiple different caches configured for Red Hat Single Sign-On. There is a realm cache that holds information about secured applications, general security data, and configuration options. There is also a user cache that contains user metadata. Both caches default to a maximum of 10000 entries and use a least recently used eviction strategy. Each of them is also tied to an object revisions cache that controls eviction in a clustered setup. This cache is created implicitly and has twice the configured size. The same applies for the authorization cache, which holds the authorization data. The keys cache holds data about external keys and does not need a dedicated revisions cache. Rather, it has expiration explicitly declared on it, so the keys are periodically expired and forced to be periodically downloaded from external clients or identity providers. The eviction policy and max entries for these caches can be configured in the standalone.xml , standalone-ha.xml , or domain.xml depending on your operating mode . In the configuration file, the infinispan subsystem section looks similar to this: <subsystem xmlns="urn:jboss:domain:infinispan:9.0"> <cache-container name="keycloak"> <local-cache name="realms"> <object-memory size="10000"/> </local-cache> <local-cache name="users"> <object-memory size="10000"/> </local-cache> ... <local-cache name="keys"> <object-memory size="1000"/> <expiration max-idle="3600000"/> </local-cache> ... </cache-container> To limit or expand the number of allowed entries, add or edit the object-memory element or the expiration element of the particular cache configuration. In addition, there are also separate caches sessions , clientSessions , offlineSessions , offlineClientSessions , loginFailures and actionTokens . These caches are distributed in a cluster environment and they are unbounded in size by default. If they were bounded, it would be possible for some sessions to be lost. Expired sessions are cleared internally by Red Hat Single Sign-On itself to avoid growing the size of these caches without limit.
If you see memory issues due to a large number of sessions, you can try to: Increase the size of the cluster (more nodes in the cluster means that sessions are spread more equally among nodes) Increase the memory for the Red Hat Single Sign-On server process Decrease the number of owners to ensure that caches are saved in one single place. See Section 10.2, "Replication and Failover" for more details Disable l1-lifespan for distributed caches. See Infinispan documentation for more details Decrease session timeouts, which could be done individually for each realm in the Red Hat Single Sign-On admin console. But this could affect usability for end users. See Timeouts for more details. There is an additional replicated cache, work , which is mostly used to send messages among cluster nodes; it is also unbounded by default. However, this cache should not cause any memory issues as entries in this cache are very short-lived. 10.2. Replication and Failover There are caches like sessions , authenticationSessions , offlineSessions , loginFailures and a few others (See Section 10.1, "Eviction and Expiration" for more details), which are configured as distributed caches when using a clustered setup. Entries are not replicated to every single node, but instead one or more nodes is chosen as an owner of that data. If a node is not the owner of a specific cache entry it queries the cluster to obtain it. What this means for failover is that if all the nodes that own a piece of data go down, that data is lost forever. By default, Red Hat Single Sign-On only specifies one owner for data. So if that one node goes down that data is lost. This usually means that users will be logged out and will have to login again. You can change the number of nodes that replicate a piece of data by changing the owners attribute in the distributed-cache declaration. <subsystem xmlns="urn:jboss:domain:infinispan:9.0"> <cache-container name="keycloak"> <distributed-cache name="sessions" owners="2"/> ... Here we've changed it so at least two nodes will replicate one specific user login session. Tip The number of owners recommended is really dependent on your deployment. If you do not care if users are logged out when a node goes down, then one owner is good enough and you will avoid replication. Tip It is generally wise to configure your environment to use a load balancer with sticky sessions. It is beneficial for performance as the Red Hat Single Sign-On server where the particular request is served will usually be the owner of the data from the distributed cache and will therefore be able to look up the data locally. See Section 9.4, "Sticky sessions" for more details. 10.3. Disabling Caching To disable the realm or user cache, you must edit the standalone.xml , standalone-ha.xml , or domain.xml file in your distribution. The location of this file depends on your operating mode . Here's what the config looks like initially. <spi name="userCache"> <provider name="default" enabled="true"/> </spi> <spi name="realmCache"> <provider name="default" enabled="true"/> </spi> To disable the cache, set the enabled attribute to false for the cache you want to disable. You must reboot your server for this change to take effect. 10.4. Clearing Caches at Runtime To clear the realm or user cache, go to the Red Hat Single Sign-On admin console Realm Settings->Cache Config page. On this page you can clear the realm cache, the user cache or cache of external public keys. Note The cache will be cleared for all realms!
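If you prefer to verify these settings from a shell instead of the admin console, you can inspect the configuration file directly. A minimal sketch, assuming a standard standalone-ha.xml layout under RHSSO_HOME (adjust the path for your operating mode):
# Show the configured size of the users cache and the owners setting for sessions
$ grep -A 1 '<local-cache name="users">' $RHSSO_HOME/standalone/configuration/standalone-ha.xml
$ grep '<distributed-cache name="sessions"' $RHSSO_HOME/standalone/configuration/standalone-ha.xml
# Confirm whether the realm and user caches are enabled
$ grep -A 1 -E '<spi name="(userCache|realmCache)">' $RHSSO_HOME/standalone/configuration/standalone-ha.xml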
[ "<subsystem xmlns=\"urn:jboss:domain:infinispan:9.0\"> <cache-container name=\"keycloak\"> <local-cache name=\"realms\"> <object-memory size=\"10000\"/> </local-cache> <local-cache name=\"users\"> <object-memory size=\"10000\"/> </local-cache> <local-cache name=\"keys\"> <object-memory size=\"1000\"/> <expiration max-idle=\"3600000\"/> </local-cache> </cache-container>", "<subsystem xmlns=\"urn:jboss:domain:infinispan:9.0\"> <cache-container name=\"keycloak\"> <distributed-cache name=\"sessions\" owners=\"2\"/>", "<spi name=\"userCache\"> <provider name=\"default\" enabled=\"true\"/> </spi> <spi name=\"realmCache\"> <provider name=\"default\" enabled=\"true\"/> </spi>" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_installation_and_configuration_guide/cache-configuration
Chapter 1. Navigating CodeReady Workspaces using the Dashboard
Chapter 1. Navigating CodeReady Workspaces using the Dashboard The Dashboard is accessible on your cluster from a URL like http:// <che-instance> . <IP-address> .mycluster.mycompany.com/dashboard/ . This section describes how to access this URL on OpenShift. 1.1. Logging in to CodeReady Workspaces on OpenShift for the first time using OAuth This section describes how to log in to CodeReady Workspaces on OpenShift for the first time using OAuth. Prerequisites Contact the administrator of the OpenShift instance to obtain the Red Hat CodeReady Workspaces URL . Procedure Navigate to the Red Hat CodeReady Workspaces URL to display the Red Hat CodeReady Workspaces login page. Choose the OpenShift OAuth option. The Authorize Access page is displayed. Click on the Allow selected permissions button. Update the account information: specify the Username , Email , First name and Last name fields and click the Submit button. Validation steps The browser displays the Red Hat CodeReady Workspaces Dashboard . 1.2. Logging in to CodeReady Workspaces on OpenShift for the first time registering as a new user This section describes how to log in to CodeReady Workspaces on OpenShift for the first time registering as a new user. Prerequisites Contact the administrator of the OpenShift instance to obtain the Red Hat CodeReady Workspaces URL . Procedure Navigate to the Red Hat CodeReady Workspaces URL to display the Red Hat CodeReady Workspaces login page. Choose the Register as a new user option. Update the account information: specify the Username , Email , First name and Last name field and click the Submit button. Validation steps The browser displays the Red Hat CodeReady Workspaces Dashboard . 1.3. Finding CodeReady Workspaces cluster URL using the OpenShift 4 CLI This section describes how to obtain the CodeReady Workspaces cluster URL using the OpenShift 4 CLI (command line interface). The URL can be retrieved from the OpenShift logs or from the checluster Custom Resource. Prerequisites An instance of Red Hat CodeReady Workspaces running on OpenShift. User is located in a CodeReady Workspaces installation namespace. Procedure To retrieve the CodeReady Workspaces cluster URL from the checluster CR (Custom Resource), run: USD oc get checluster --output jsonpath='{.items[0].status.cheURL}' Alternatively, to retrieve the CodeReady Workspaces cluster URL from the OpenShift logs, run: USD oc logs --tail=10 `(oc get pods -o name | grep operator)` | \ grep "available at" | \ awk -F'available at: ' '{print USD2}' | sed 's/"//'
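As a quick sanity check that the URL you retrieved actually serves the Dashboard, you can request it with curl. A minimal sketch that builds on the oc command above; an unauthenticated request may return a redirect code such as 302 rather than 200:
$ CHE_URL=$(oc get checluster --output jsonpath='{.items[0].status.cheURL}')
$ curl -sk -o /dev/null -w '%{http_code}\n' "$CHE_URL/dashboard/"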
[ "oc get checluster --output jsonpath='{.items[0].status.cheURL}'", "oc logs --tail=10 `(oc get pods -o name | grep operator)` | grep \"available at\" | awk -F'available at: ' '{print USD2}' | sed 's/\"//'" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/end-user_guide/navigating-codeready-workspaces-using-the-dashboard_crw
Chapter 22. Glossary
Chapter 22. Glossary This glossary defines common terms that are used in the logging documentation. Annotation You can use annotations to attach metadata to objects. Red Hat OpenShift Logging Operator The Red Hat OpenShift Logging Operator provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs. Custom resource (CR) A CR is an extension of the Kubernetes API. To configure the logging and log forwarding, you can customize the ClusterLogging and the ClusterLogForwarder custom resources. Event router The event router is a pod that watches OpenShift Container Platform events. It collects logs by using the logging. Fluentd Fluentd is a log collector that resides on each OpenShift Container Platform node. It gathers application, infrastructure, and audit logs and forwards them to different outputs. Garbage collection Garbage collection is the process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Elasticsearch Elasticsearch is a distributed search and analytics engine. OpenShift Container Platform uses Elasticsearch as a default log store for the logging. OpenShift Elasticsearch Operator The OpenShift Elasticsearch Operator is used to run an Elasticsearch cluster on OpenShift Container Platform. The OpenShift Elasticsearch Operator provides self-service for the Elasticsearch cluster operations and is used by the logging. Indexing Indexing is a data structure technique that is used to quickly locate and access data. Indexing optimizes the performance by minimizing the amount of disk access required when a query is processed. JSON logging The Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either the logging managed Elasticsearch or any other third-party system supported by the Log Forwarding API. Kibana Kibana is a browser-based console interface to query, discover, and visualize your Elasticsearch data through histograms, line graphs, and pie charts. Kubernetes API server Kubernetes API server validates and configures data for the API objects. Labels Labels are key-value pairs that you can use to organize and select subsets of objects, such as a pod. Logging With the logging, you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them to a default log store, forward them to third party systems, and query and visualize the stored logs in the default log store. Logging collector A logging collector collects logs from the cluster, formats them, and forwards them to the log store or third party systems. Log store A log store is used to store aggregated logs. You can use an internal log store or forward logs to external log stores. Log visualizer Log visualizer is the user interface (UI) component you can use to view information such as logs, graphs, charts, and other metrics. Node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. Operators Operators are the preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. Pod A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node. 
Role-based access control (RBAC) RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles. Shards Elasticsearch organizes log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards. Taint Taints ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. Toleration You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. Web console A user interface (UI) to manage OpenShift Container Platform.
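As an illustration of several of these terms (node, label, taint, and toleration), the following commands attach a label and a taint to a node so that only pods with a matching toleration are scheduled there. This is a generic sketch, not specific to the logging components; <node_name> and the key names are placeholders:
# Attach key-value metadata (a label) to a node
$ oc label node <node_name> node-role.kubernetes.io/logging=""
# Taint the node; only pods that tolerate logging=reserved:NoSchedule will be scheduled onto it
$ oc adm taint nodes <node_name> logging=reserved:NoSchedule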
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/openshift-logging-common-terms
Chapter 8. Multicloud Object Gateway bucket replication
Chapter 8. Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (AWS S3, Azure, and so on). A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication. Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway, see link:Accessing the Multicloud Object Gateway with your applications. Download the Multicloud Object Gateway (MCG) command-line interface: Important Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Power use the following command: Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Important Choose the correct Product Variant according to your architecture. Note Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG's features. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1. Replicating a bucket to another bucket using the MCG command-line interface You can set a replication policy for Multicloud Object Gateway (MCG) data bucket at the time of creation of object bucket claim (OBC). You must define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: 8.1.2. Replicating a bucket to another bucket using a YAML You can set a replication policy for Multicloud Object Gateway (MCG) data bucket at the time of creation of object bucket claim (OBC) or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. 
Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. Setting a bucket class replication policy using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class. You must define the replication-policy parameter in a JSON file. You can set a bucket class replication policy for the Placement and Namespace bucket classes. Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. You can pass many backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . For example: This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class, or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. Each Object bucket claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. You can pass many backingstores. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . 8.3. Enabling log based bucket replication When creating a bucket replication policy, you can use logs so that recent data is replicated more quickly, while the default scan-based replication works on replicating the rest of the data. Important This feature requires setting up bucket logs on AWS or Azure. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging . The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket. Note This feature is only supported in buckets that are backed by a NamespaceStore. Buckets backed by BackingStores cannot utilize log-based replication. 8.3.1. Enabling log based bucket replication for new namespace buckets using OpenShift Web Console in Amazon Web Service environment You can optimize replication by using the event logs of the Amazon Web Services (AWS) cloud environment.
You enable log based bucket replication for new namespace buckets using the web console during the creation of namespace buckets. Prerequisites Ensure that object logging is enabled in AWS. For more information, see the "Using the S3 console" section in Enabling Amazon S3 server access logging . Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage Object Storage Object Bucket Claims . Click Create ObjectBucketClaim . Enter the name of ObjectBucketName and select StorageClass and BucketClass. Select the Enable replication check box to enable replication. In the Replication policy section, select the Optimize replication using event logs checkbox. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// . Enter a prefix to replicate only the objects whose name begins with the given prefix. 8.3.2. Enabling log based bucket replication for existing namespace buckets using YAML You can enable log based bucket replication for existing buckets that are created using the command line interface or by applying a YAML, and not for buckets that are created using AWS S3 commands. Procedure Edit the YAML of the bucket's OBC to enable log based bucket replication. Add the following under spec : Note It is also possible to add this to the YAML of an OBC before it is created. rule_id Specify an ID of your choice for identifying the rule. destination_bucket Specify the name of the target MCG bucket that the objects are copied to. (optional) {"filter": {"prefix": <>}} Specify a prefix string that you can set to filter the objects that are replicated. log_replication_info Specify an object that contains data related to log-based replication optimization. {"logs_location": {"logs_bucket": <>}} is set to the location of the AWS S3 server access logs. 8.3.3. Enabling log based bucket replication in Microsoft Azure Prerequisites Refer to Microsoft Azure documentation and ensure that you have completed the following tasks in the Microsoft Azure portal: Ensure that you have created a new application and noted down the name, application (client) ID, and directory (tenant) ID. For information, see Register an application . Ensure that a new client secret is created and the application secret is noted down. Ensure that a new Log Analytics workspace is created and its name and workspace ID are noted down. For information, see Create a Log Analytics workspace . Ensure that the Reader role is assigned under Access control , members are selected, and the name of the application that you registered earlier is provided. For more information, see Assign Azure roles using the Azure portal . Ensure that a new storage account is created and the Access keys are noted down. In the Monitoring section of the storage account that you created, select a blob and, in the Diagnostic settings screen, select only StorageWrite and StorageDelete , and in the destination details add the Log Analytics workspace that you created earlier. For more information, see Diagnostic settings in Azure Monitor . Ensure that two new containers for object source and object destination are created.
Administrator access to OpenShift Web Console. Procedure Create a secret with credentials to be used by the namespacestores . Create a NamespaceStore backed by a container created in Azure. For more information, see Adding a namespace bucket using the OpenShift Container Platform user interface . Create a new Namespace-Bucketclass and OBC that utilizes it. Check the object bucket name by looking in the YAML of target OBC, or by listing all S3 buckets, for example, - s3 ls . Use the following template to apply an Azure replication policy on your source OBC by adding the following in its YAML, under .spec : sync_deletion Specify a boolean value, true or false . destination_bucket Make sure to use the name of the object bucket, and not the claim. The name can be retrieved using the s3 ls command, or by looking for the value in an OBC's YAML. Verification steps Write objects to the source bucket. Wait until MCG replicates them. Delete the objects from the source bucket. Verify the objects were removed from the target bucket. 8.3.4. Enabling log-based bucket replication deletion Prerequisites Administrator access to OpenShift Web Console. AWS Server Access Logging configured for the desired bucket. Procedure In the OpenShift Web Console, navigate to Storage Object Storage Object Bucket Claims . Click Create new Object bucket claim . (Optional) In the Replication rules section, select the Sync deletion checkbox for each rule separately. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix.
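To check that replication is actually copying data, you can write an object to the source bucket through the MCG S3 endpoint and then list the destination bucket. This is a minimal sketch; it assumes an S3 route named s3 in the openshift-storage namespace, that AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported from the OBC secret, and that first.bucket is the destination bucket used in the examples above. Replication is not instantaneous, so allow some time before checking:
$ S3_ENDPOINT=https://$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')
# Write a test object whose key matches the replication rule prefix
$ aws s3 cp ./testfile s3://<source-bucket>/repl-testfile --endpoint-url "$S3_ENDPOINT" --no-verify-ssl
# After a while, confirm the object appears in the destination bucket
$ aws s3 ls s3://first.bucket/ --endpoint-url "$S3_ENDPOINT" --no-verify-ssl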
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replicationPolicy: |+ { \"rules\": [ {\"rule_id\":\"rule-1\", \"destination_bucket\":\"first.bucket\" } ] }", "noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]", "replicationPolicy: '{\"rules\":[{\"rule_id\":\"<RULE ID>\", \"destination_bucket\":\"<DEST>\", \"filter\": {\"prefix\": \"<PREFIX>\"}}], \"log_replication_info\": {\"logs_location\": {\"logs_bucket\": \"<LOGS_BUCKET>\"}}}'", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: TenantID: <AZURE TENANT ID ENCODED IN BASE64> ApplicationID: <AZURE APPLICATIOM ID ENCODED IN BASE64> ApplicationSecret: <AZURE APPLICATION SECRET ENCODED IN BASE64> LogsAnalyticsWorkspaceID: <AZURE LOG ANALYTICS WORKSPACE ID ENCODED IN BASE64> AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "replicationPolicy:'{\"rules\":[ {\"rule_id\":\"ID goes here\", \"sync_deletions\": \"<true or false>\"\", \"destination_bucket\":object bucket name\"} ], \"log_replication_info\":{\"endpoint_type\":\"AZURE\"}}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/multicloud_object_gateway_bucket_replication
Chapter 4. Remote health monitoring
Chapter 4. Remote health monitoring OpenShift Data Foundation collects anonymized aggregated information about the health, usage, and size of clusters and reports it to Red Hat via an integrated component called Telemetry. This information allows Red Hat to improve OpenShift Data Foundation and to react to issues that impact customers more quickly. A cluster that reports data to Red Hat via Telemetry is considered a connected cluster . 4.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. These metrics are sent continuously and describe: The size of an OpenShift Data Foundation cluster The health and status of OpenShift Data Foundation components The health and status of any upgrade being performed Limited usage information about OpenShift Data Foundation components and features Summary info about alerts reported by the cluster monitoring component This continuous stream of data is used by Red Hat to monitor the health of clusters in real time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Data Foundation upgrades to customers so as to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and engineering teams with the same restrictions as accessing data reported via support cases. All connected cluster information is used by Red Hat to help make OpenShift Data Foundation better and more intuitive to use. None of the information is shared with third parties. 4.2. Information collected by Telemetry Primary information collected by Telemetry includes: The size of the Ceph cluster in bytes : "ceph_cluster_total_bytes" , The amount of the Ceph cluster storage used in bytes : "ceph_cluster_total_used_raw_bytes" , Ceph cluster health status : "ceph_health_status" , The total count of object storage devices (OSDs) : "job:ceph_osd_metadata:count" , The total number of OpenShift Data Foundation Persistent Volumes (PVs) present in the Red Hat OpenShift Container Platform cluster : "job:kube_pv:count" , The total input/output operations per second (IOPS) (reads+writes) value for all the pools in the Ceph cluster : "job:ceph_pools_iops:total" , The total IOPS (reads+writes) value in bytes for all the pools in the Ceph cluster : "job:ceph_pools_iops_bytes:total" , The total count of the Ceph cluster versions running : "job:ceph_versions_running:count" The total number of unhealthy NooBaa buckets : "job:noobaa_total_unhealthy_buckets:sum" , The total number of NooBaa buckets : "job:noobaa_bucket_count:sum" , The total number of NooBaa objects : "job:noobaa_total_object_count:sum" , The count of NooBaa accounts : "noobaa_accounts_num" , The total usage of storage by NooBaa in bytes : "noobaa_total_usage" , The total amount of storage requested by the persistent volume claims (PVCs) from a particular storage provisioner in bytes: "cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" , The total amount of storage used by the PVCs from a particular storage provisioner in bytes: "cluster:kubelet_volume_stats_used_bytes:provisioner:sum" . Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources.
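If you want to see the current values of these metrics on your own cluster, you can query them through the OpenShift monitoring stack. A minimal sketch, assuming the thanos-querier route in the openshift-monitoring namespace and a user with permission to view cluster monitoring data:
$ TOKEN=$(oc whoami -t)
$ THANOS=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
# Query one of the metrics listed above, for example the raw size of the Ceph cluster
$ curl -sk -H "Authorization: Bearer $TOKEN" \
    "https://$THANOS/api/v1/query?query=ceph_cluster_total_bytes"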
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/monitoring_openshift_data_foundation/remote_health_monitoring
6.6. Diagnosing and Correcting Problems in a Cluster
6.6. Diagnosing and Correcting Problems in a Cluster For information about diagnosing and correcting problems in a cluster, contact an authorized Red Hat support representative.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-admin-problems-CA
7.321. polkit
7.321. polkit 7.321.1. RHSA-2013:1270 - Important: polkit security update Updated polkit packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. PolicyKit is a toolkit for defining and handling authorizations. Security Fix CVE-2013-4288 A race condition was found in the way the PolicyKit pkcheck utility checked process authorization when the process was specified by its process ID via the --process option. A local user could use this flaw to bypass intended PolicyKit authorizations and escalate their privileges. Note: Applications that invoke pkcheck with the --process option need to be modified to use the pid,pid-start-time,uid argument for that option, to allow pkcheck to check process authorization correctly. Red Hat would like to thank Sebastian Krahmer of the SUSE Security Team for reporting this issue. All polkit users should upgrade to these updated packages, which contain a backported patch to correct this issue. The system must be rebooted for this update to take effect.
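A minimal sketch of applying the update and confirming the installed package; the exact fixed version string depends on your errata channel, and the pkcheck action ID shown in the comment is only a placeholder:
# Apply the polkit security update and confirm the new package version
$ yum update polkit
$ rpm -q polkit
# Callers of pkcheck should pass the full triple, for example:
#   pkcheck --action-id <action_id> --process <pid>,<pid-start-time>,<uid>
# Reboot so that all running processes pick up the update
$ reboot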
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/polkit
15.2.3. Uninstalling
15.2.3. Uninstalling Uninstalling a package is just as simple as installing one. Type the following command at a shell prompt: Note Notice that we used the package name foo , not the name of the original package file foo-1.0-1.i386.rpm . To uninstall a package, replace foo with the actual package name of the original package. You can encounter a dependency error when uninstalling a package if another installed package depends on the one you are trying to remove. For example: To cause RPM to ignore this error and uninstall the package anyway, which may break the package depending on it, use the --nodeps option.
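Before removing a package, you can check which installed packages require it; this tells you in advance whether rpm -e will fail with a dependency error. A short sketch using the same placeholder package name foo:
# List installed packages that depend on foo
$ rpm -q --whatrequires foo
# Remove foo, or force removal despite dependencies (this may break the dependent packages)
$ rpm -e foo
$ rpm -e --nodeps foo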
[ "-e foo", "error: Failed dependencies: foo is needed by (installed) bar-2.0.20-3.i386.rpm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/using_rpm-uninstalling
Chapter 9. Preinstallation validations
Chapter 9. Preinstallation validations 9.1. Definition of preinstallation validations The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. The Assisted Installer performs validation checks on the configuration and the gathered telemetry before starting an installation. The Assisted Installer uses the information provided before installation, such as control plane topology, network configuration and hostnames. It will also use real time telemetry from the hosts you are attempting to install. When a host boots the discovery ISO, an agent will start on the host. The agent will send information about the state of the host to the Assisted Installer. The Assisted Installer uses all of this information to compute real time preinstallation validations. All validations are either blocking or non-blocking to the installation. 9.2. Blocking and non-blocking validations A blocking validation will prevent progress of the installation, meaning that you will need to resolve the issue and pass the blocking validation before you can proceed. A non-blocking validation is a warning and will tell you of things that might cause you a problem. 9.3. Validation types The Assisted Installer performs two types of validation: Host Host validations ensure that the configuration of a given host is valid for installation. Cluster Cluster validations ensure that the configuration of the whole cluster is valid for installation. 9.4. Host validations 9.4.1. Getting host validations by using the REST API Note If you use the web console, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the web console. You have hosts booted with the discovery ISO You have your Cluster ID exported in your shell as CLUSTER_ID . You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedures Refresh the API token: USD source refresh-token Get all validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[])' Get non-passing validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[]) | map(select(.status=="failure" or .status=="pending")) | select(length>0)' 9.4.2. Host validations in detail Parameter Validation type Description connected non-blocking Checks that the host has recently communicated with the Assisted Installer. has-inventory non-blocking Checks that the Assisted Installer received the inventory from the host. has-min-cpu-cores non-blocking Checks that the number of CPU cores meets the minimum requirements. has-min-memory non-blocking Checks that the amount of memory meets the minimum requirements. has-min-valid-disks non-blocking Checks that at least one available disk meets the eligibility criteria. has-cpu-cores-for-role blocking Checks that the number of cores meets the minimum requirements for the host role. has-memory-for-role blocking Checks that the amount of memory meets the minimum requirements for the host role. 
ignition-downloadable blocking For Day 2 hosts, checks that the host can download ignition configuration from the Day 1 cluster. belongs-to-majority-group blocking The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, Day 1 cluster are in the majority group. valid-platform-network-settings blocking Checks that the platform is valid for the network settings. ntp-synced non-blocking Checks if an NTP server has been successfully used to synchronize time on the host. container-images-available non-blocking Checks if container images have been successfully pulled from the image registry. sufficient-installation-disk-speed blocking Checks that disk speed metrics from an earlier installation meet requirements, if they exist. sufficient-network-latency-requirement-for-role blocking Checks that the average network latency between hosts in the cluster meets the requirements. sufficient-packet-loss-requirement-for-role blocking Checks that the network packet loss between hosts in the cluster meets the requirements. has-default-route blocking Checks that the host has a default route configured. api-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the API domain name for the cluster. api-int-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal API domain name for the cluster. apps-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal apps domain name for the cluster. compatible-with-cluster-platform non-blocking Checks that the host is compatible with the cluster platform dns-wildcard-not-configured blocking Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift disk-encryption-requirements-satisfied non-blocking Checks that the type of host and disk encryption configured meet the requirements. non-overlapping-subnets blocking Checks that this host does not have any overlapping subnets. hostname-unique blocking Checks that the hostname is unique in the cluster. hostname-valid blocking Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden. The hostname must have 63 characters or less. The hostname must start and end with a lowercase alphanumeric character. The hostname must have only lowercase alphanumeric characters, dashes, and periods. belongs-to-machine-cidr blocking Checks that the host IP is in the address range of the machine CIDR. lso-requirements-satisfied blocking Validates that the host meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the host meets the requirements of the OpenShift Data Foundation Operator. Each host running ODF workloads (control plane nodes in compact mode, compute nodes in standard mode) requires an eligible disk. This is a disk with at least 25GB that is not the installation disk and is of type SSD or HDD . All hosts must have manually assigned roles. cnv-requirements-satisfied blocking Validates that the host meets the requirements of Container Native Virtualization. The BIOS of the host must have CPU virtualization enabled. 
The host must have enough CPU cores and RAM available for Container Native Virtualization. Validates the Host Path Provisioner if necessary. lvm-requirements-satisfied blocking Validates that the host meets the requirements of the Logical Volume Manager Storage Operator. The host has at least one additional empty disk that is not partitioned and not formatted. vsphere-disk-uuid-enabled non-blocking Verifies that each valid disk sets disk.EnableUUID to TRUE. In vSphere, this results in each disk having a UUID. compatible-agent blocking Checks that the discovery agent version is compatible with the agent docker image version. no-skip-installation-disk blocking Checks that the installation disk is not skipping disk formatting. no-skip-missing-disk blocking Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that. media-connected blocking Checks the connection of the installation media to the host. machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. id-platform-network-settings blocking Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing Single Node OpenShift or when using User Managed Networking. mtu-valid non-blocking Checks the maximum transmission unit (MTU) of hosts and networking devices in the cluster environment to identify compatibility issues. For more information, see Additional resources. Additional resources Changing the MTU for the cluster network 9.5. Cluster validations 9.5.1. Getting cluster validations by using the REST API If you use the web console, many of these validations will not show up by name. To obtain a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the web console. You have your Cluster ID exported in your shell as CLUSTER_ID. You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedure Refresh the API token: USD source refresh-token Get all cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq 'map(.[])' Get non-passing cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq '. | map(.[] | select(.status=="failure" or .status=="pending")) | select(length>0)' 9.5.2. Cluster validations in detail Parameter Validation type Description machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. cluster-cidr-defined non-blocking Checks that the cluster network definition exists for the cluster. service-cidr-defined non-blocking Checks that the service network definition exists for the cluster. no-cidrs-overlapping blocking Checks that the defined networks do not overlap. networks-same-address-families blocking Checks that the defined networks share the same address families (valid address families are IPv4 and IPv6). network-prefix-valid blocking Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts.
machine-cidr-equals-to-calculated-cidr blocking For a cluster without user-managed networking. Checks that apiVIPs or ingressVIPs are members of the machine CIDR if they exist. api-vips-defined non-blocking For a cluster without user-managed networking. Checks that apiVIPs exist. api-vips-valid blocking For a cluster without user-managed networking. Checks if the apiVIPs belong to the machine CIDR and are not in use. ingress-vips-defined blocking For a cluster without user-managed networking. Checks that ingressVIPs exist. ingress-vips-valid non-blocking For a cluster without user-managed networking. Checks if the ingressVIPs belong to the machine CIDR and are not in use. all-hosts-are-ready-to-install blocking Checks that all hosts in the cluster are in the "ready to install" status. sufficient-masters-count blocking For a multi-node OpenShift Container Platform installation, checks that the current number of hosts in the cluster designated either manually or automatically to be control plane (master) nodes equals the number that the user defined for the cluster as the control_plane_count value. For a single-node OpenShift installation, checks that there is exactly one control plane (master) node and no compute (worker) nodes. dns-domain-defined non-blocking Checks that the base DNS domain exists for the cluster. pull-secret-set non-blocking Checks that the pull secret exists. Does not check that the pull secret is valid or authorized. ntp-server-configured blocking Checks that the host clocks are no more than 4 minutes out of sync with each other. lso-requirements-satisfied blocking Validates that the cluster meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the cluster meets the requirements of the OpenShift Data Foundation Operator. The cluster has either at least three control plane (master) nodes and no compute (worker) nodes at all (compact mode), or at least three control plane (master) nodes and at least three compute (worker) nodes (standard mode). Each host running ODF workloads (control plane nodes in compact mode, compute nodes in standard mode) requires a non-installation disk of type SSD or HDD with at least 25 GB of storage. All hosts must have manually assigned roles. cnv-requirements-satisfied blocking Validates that the cluster meets the requirements of Container Native Virtualization. The CPU architecture for the cluster is x86. lvm-requirements-satisfied blocking Validates that the cluster meets the requirements of the Logical Volume Manager Storage Operator. The cluster must be single-node. The cluster must be running OpenShift 4.11.0 or later. network-type-valid blocking Checks the validity of the network type if it exists. The network type must be OpenshiftSDN (OpenShift Container Platform 4.14 or earlier) or OVNKubernetes. OpenshiftSDN does not support IPv6 or Single Node OpenShift. OpenshiftSDN is not supported for OpenShift Container Platform 4.15 and later releases. OVNKubernetes does not support VIP DHCP allocation.
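The queries shown earlier can be combined into a single convenience script that reports only the non-passing validations for the cluster and for each of its hosts. The following is a minimal sketch rather than a supported tool: it assumes CLUSTER_ID and API_TOKEN are already exported as described in the prerequisites, and the id and requested_hostname host fields are assumptions that you should verify against your API version.

#!/usr/bin/env bash
# Minimal sketch: print every failing or pending validation for the
# cluster and for each of its hosts. CLUSTER_ID and API_TOKEN must be
# exported; the "id" and "requested_hostname" host fields are assumed
# names -- verify them against your API version.
set -euo pipefail

api="https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}"
auth=(--silent --header "Authorization: Bearer ${API_TOKEN}")

echo "== Cluster validations (failure or pending) =="
curl "${auth[@]}" "${api}" \
  | jq -r .validations_info \
  | jq 'map(.[]) | map(select(.status=="failure" or .status=="pending"))'

echo "== Host validations (failure or pending) =="
curl "${auth[@]}" "${api}/hosts" \
  | jq '.[]
        | {host: (.requested_hostname // .id),
           non_passing: (.validations_info | fromjson
                         | map(.[])
                         | map(select(.status=="failure" or .status=="pending")))}
        | select(.non_passing | length > 0)'

An empty result under either heading means that all validations in that scope have passed.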
[ "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[]) | map(select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'", "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq '. | map(.[] | select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'" ]
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2025/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_preinstallation-validations
Chapter 22. Write Barriers
Chapter 22. Write Barriers A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage, even when storage devices with volatile write caches lose power. File systems with write barriers enabled also ensure that data transmitted via fsync() is persistent throughout a power loss. Enabling write barriers incurs a substantial performance penalty for some applications. Specifically, applications that use fsync() heavily or create and delete many small files will likely run much slower. 22.1. Importance of Write Barriers File systems safely update metadata, ensuring consistency. Journaled file systems bundle metadata updates into transactions and send them to persistent storage in the following manner: The file system sends the body of the transaction to the storage device. The file system sends a commit block. If the transaction and its corresponding commit block are written to disk, the file system assumes that the transaction will survive any power failure. However, file system integrity during power failure becomes more complex for storage devices with extra caches. Modern storage target devices, such as local SATA or SAS drives, may have write caches ranging from 32 MB to 64 MB in size. Hardware RAID controllers often contain internal write caches. Further, high-end arrays, like those from NetApp, IBM, Hitachi, and EMC (among others), also have large caches. Storage devices with write caches report I/O as "complete" when the data is in cache; if the cache loses power, it loses its data as well. Worse, as the cache de-stages to persistent storage, it may change the original metadata ordering. When this occurs, the commit block may be present on disk without the complete, associated transaction in place. As a result, the journal may replay these uninitialized transaction blocks into the file system during post-power-loss recovery; this causes data inconsistency and corruption. How Write Barriers Work Write barriers are implemented in the Linux kernel via storage write cache flushes before and after the order-critical I/O. After the transaction is written, the storage cache is flushed, the commit block is written, and the cache is flushed again. This ensures that: The disk contains all the data. No re-ordering has occurred. With barriers enabled, an fsync() call also issues a storage cache flush. This guarantees that file data is persistent on disk even if power loss occurs shortly after fsync() returns.
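To make the trade-off concrete, the following shell sketch shows how to check whether a drive reports a volatile write cache and how barrier behavior is typically controlled through mount options on ext4 and XFS in Red Hat Enterprise Linux 7. The device and mount point names are placeholders; disabling barriers is only appropriate when the write cache is battery-backed or non-volatile.

# Report whether the drive's volatile write cache is enabled; a cached
# drive that loses power is exactly the case write barriers guard against.
hdparm -W /dev/sda

# Barriers are enabled by default on ext4 and XFS. Disabling them trades
# crash safety for fsync()-heavy performance and should only be done when
# the write cache is non-volatile or battery-backed.
mount -o nobarrier /dev/sda1 /mnt/data

# Re-enable barriers explicitly on an ext4 file system.
mount -o remount,barrier=1 /mnt/data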
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-writebarriers