Chapter 14. Performance and reliability tuning
Chapter 14. Performance and reliability tuning 14.1. Flow control mechanisms If logs are produced faster than they can be collected, it can be difficult to predict or control the volume of logs being sent to an output. Not being able to predict or control the volume of logs being sent to an output can result in logs being lost. If there is a system outage and log buffers are accumulated without user control, this can also cause long recovery times and high latency when the connection is restored. As an administrator, you can limit logging rates by configuring flow control mechanisms for your logging. 14.1.1. Benefits of flow control mechanisms The cost and volume of logging can be predicted more accurately in advance. Noisy containers cannot produce unbounded log traffic that drowns out other containers. Ignoring low-value logs reduces the load on the logging infrastructure. High-value logs can be preferred over low-value logs by assigning higher rate limits. 14.1.2. Configuring rate limits Rate limits are configured per collector, which means that the maximum rate of log collection is the number of collector instances multiplied by the rate limit. Because logs are collected from each node's file system, a collector is deployed on each cluster node. For example, in a 3-node cluster, with a maximum rate limit of 10 records per second per collector, the maximum rate of log collection is 30 records per second. Because the exact byte size of a record as written to an output can vary due to transformations, different encodings, or other factors, rate limits are set in number of records instead of bytes. You can configure rate limits in the ClusterLogForwarder custom resource (CR) in two ways: Output rate limit Limit the rate of outbound logs to selected outputs, for example, to match the network or storage capacity of an output. The output rate limit controls the aggregated per-output rate. Input rate limit Limit the per-container rate of log collection for selected containers. 14.1.3. Configuring log forwarder output rate limits You can limit the rate of outbound logs to a specified output by configuring the ClusterLogForwarder custom resource (CR). Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. Procedure Add a maxRecordsPerSecond limit value to the ClusterLogForwarder CR for a specified output. The following example shows how to configure a per collector output rate limit for a Kafka broker output named kafka-example : Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3 # ... 1 The output name. 2 The type of output. 3 The log output rate limit. This value sets the maximum Quantity of logs that can be sent to the Kafka broker per second. This value is not set by default. The default behavior is best effort, and records are dropped if the log forwarder cannot keep up. If this value is 0 , no logs are forwarded. Apply the ClusterLogForwarder CR: Example command USD oc apply -f <filename>.yaml Additional resources Log output types 14.1.4. Configuring log forwarder input rate limits You can limit the rate of incoming logs that are collected by configuring the ClusterLogForwarder custom resource (CR). You can set input limits on a per-container or per-namespace basis. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. 
Procedure Add a maxRecordsPerSecond limit value to the ClusterLogForwarder CR for a specified input. The following examples show how to configure input rate limits for different scenarios: Example ClusterLogForwarder CR that sets a per-container limit for containers with certain labels apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3 # ... 1 The input name. 2 A list of labels. If these labels match labels that are applied to a pod, the per-container limit specified in the maxRecordsPerSecond field is applied to those containers. 3 Configures the rate limit. Setting the maxRecordsPerSecond field to 0 means that no logs are collected for the container. Setting the maxRecordsPerSecond field to some other value means that a maximum of that number of records per second are collected for the container. Example ClusterLogForwarder CR that sets a per-container limit for containers in selected namespaces apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000 # ... 1 The input name. 2 A list of namespaces. The per-container limit specified in the maxRecordsPerSecond field is applied to all containers in the namespaces listed. 3 Configures the rate limit. Setting the maxRecordsPerSecond field to 10 means that a maximum of 10 records per second are collected for each container in the namespaces listed. Apply the ClusterLogForwarder CR: Example command USD oc apply -f <filename>.yaml 14.2. Filtering logs by content Collecting all logs from a cluster might produce a large amount of data, which can be expensive to transport and store. You can reduce the volume of your log data by filtering out low priority data that does not need to be stored. Logging provides content filters that you can use to reduce the volume of log data. Note Content filters are distinct from input selectors. input selectors select or ignore entire log streams based on source metadata. Content filters edit log streams to remove and modify records based on the record content. Log data volume can be reduced by using one of the following methods: Configuring content filters to drop unwanted log records Configuring content filters to prune log records 14.2.1. Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... 
spec: filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: filters: - name: important type: drop drop: test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: filters: - name: important type: drop drop: test: - field: .kubernetes.namespace_name matches: "^open" test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 14.2.2. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. 
The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both in and notIn are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. Apply the ClusterLogForwarder CR by running the following command: $ oc apply -f <filename>.yaml 14.2.3. Additional resources About forwarding logs to third-party systems 14.3. Filtering logs by metadata You can filter logs in the ClusterLogForwarder CR to select or ignore an entire log stream based on the metadata by using the input selector. As an administrator or developer, you can include or exclude the log collection to reduce the memory and CPU load on the collector. Important You can use this feature only if the Vector collector is set up in your logging deployment. Note Input spec filtering is different from content filtering. Input selectors select or ignore entire log streams based on the source metadata. Content filters edit the log streams to remove and modify the records based on the record content. 14.3.1. Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder # ... spec: inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of container names to ignore when collecting the logs. 4 Specifies the namespaces to ignore when collecting the logs.
Apply the ClusterLogForwarder CR by running the following command: $ oc apply -f <filename>.yaml The excludes option takes precedence over includes . 14.3.2. Filtering application logs at input by including either the label expressions or matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder # ... spec: inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key/value mapping. Apply the ClusterLogForwarder CR by running the following command: $ oc apply -f <filename>.yaml 14.3.3. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder # ... spec: inputs: - name: mylogs1 infrastructure: sources: 1 - node - name: mylogs2 audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: $ oc apply -f <filename>.yaml
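The chapter above introduces rate limits, content filters, and input selectors separately. The following is a minimal sketch, not taken from the documentation above, that wires the three together in one ClusterLogForwarder: the resource name instance, the openshift-logging namespace, the Kafka URL, and the inputRefs/outputRefs pipeline wiring are assumptions based on a typical deployment and must be adapted to your own forwarder.

$ oc apply -f - <<'EOF'
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                     # assumed name of the existing forwarder
  namespace: openshift-logging       # assumed namespace
spec:
  inputs:
  - name: throttled-apps
    application:
      namespaces: [ example-ns-1 ]
      containerLimit:
        maxRecordsPerSecond: 10      # per-container collection limit
  filters:
  - name: drop-debug
    type: drop
    drop:
    - test:
      - field: .level
        matches: "debug"             # drop records whose level field is debug
  outputs:
  - name: kafka-example
    type: kafka
    url: tls://broker.example.com:9093/app-topic   # assumed broker endpoint
    limit:
      maxRecordsPerSecond: 1000      # aggregated per-output limit
  pipelines:
  - name: throttled-pipeline
    inputRefs: [ throttled-apps ]
    filterRefs: [ drop-debug ]
    outputRefs: [ kafka-example ]
EOF

$ oc get clusterlogforwarder instance -n openshift-logging -o yaml    # confirm that the CR was accepted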
[ "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .kubernetes.namespace_name matches: \"^open\" test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]", "oc apply -f <filename>.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4", "oc apply -f <filename>.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1", "oc apply -f <filename>.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs1 infrastructure: sources: 1 - node - name: mylogs2 audit: sources: 2 - kubeAPI - openshiftAPI - ovn", "oc apply -f <filename>.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/performance-and-reliability-tuning
5.59. elinks
5.59. elinks 5.59.1. RHSA-2013:0250 - Moderate: elinks security update An updated elinks package that fixes one security issue is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. ELinks is a text-based web browser. ELinks does not display any images, but it does support frames, tables, and most other HTML tags. Security Fix CVE-2012-4545 It was found that ELinks performed client credentials delegation during the client-to-server GSS security mechanisms negotiation. A rogue server could use this flaw to obtain the client's credentials and impersonate that client to other servers that are using GSSAPI. This issue was discovered by Marko Myllynen of Red Hat. All ELinks users are advised to upgrade to this updated package, which contains a backported patch to resolve the issue.
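The remediation in the advisory above is a straightforward package update. A minimal sketch of how that typically looks on an entitled Red Hat Enterprise Linux 5 or 6 system; the exact package version installed depends on the errata available to the system, and the changelog check is only a convenience:

# yum update elinks
# rpm -q --changelog elinks | head    # the errata build's changelog normally references CVE-2012-4545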
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/elinks
24.2. Default Settings
24.2. Default Settings After defining the Server Name , Webmaster email address , and Available Addresses , click the Virtual Hosts tab and click the Edit Default Settings button. A window as shown in Figure 24.3, "Site Configuration" appears. Configure the default settings for your Web server in this window. If you add a virtual host, the settings you configure for the virtual host take precedence for that virtual host. For a directive not defined within the virtual host settings, the default value is used. 24.2.1. Site Configuration The default values for the Directory Page Search List and Error Pages work for most servers. If you are unsure of these settings, do not modify them. Figure 24.3. Site Configuration The entries listed in the Directory Page Search List define the DirectoryIndex directive. The DirectoryIndex is the default page served by the server when a user requests an index of a directory by specifying a forward slash (/) at the end of the directory name. For example, when a user requests the page http://www.example.com/this_directory/ , they get either the DirectoryIndex page, if it exists, or a server-generated directory list. The server tries to find one of the files listed in the DirectoryIndex directive and returns the first one it finds. If it does not find any of these files and if Options Indexes is set for that directory, the server generates and returns a list, in HTML format, of the subdirectories and files in the directory. Use the Error Code section to configure Apache HTTP Server to redirect the client to a local or external URL in the event of a problem or error. This option corresponds to the ErrorDocument directive. If a problem or error occurs when a client tries to connect to the Apache HTTP Server, the default action is to display the short error message shown in the Error Code column. To override this default configuration, select the error code and click the Edit button. Choose Default to display the default short error message. Choose URL to redirect the client to an external URL and enter a complete URL, including the http:// , in the Location field. Choose File to redirect the client to an internal URL and enter a file location under the document root for the Web server. The location must begin with a slash (/) and be relative to the Document Root. For example, to redirect a 404 Not Found error code to a webpage that you created in a file called 404.html , copy 404.html to DocumentRoot /../error/404.html . In this case, DocumentRoot is the Document Root directory that you have defined (the default is /var/www/html/ ). If the Document Root is left as the default location, the file should be copied to /var/www/error/404.html . Then, choose File as the Behavior for the 404 - Not Found error code and enter /error/404.html as the Location . From the Default Error Page Footer menu, you can choose one of the following options: Show footer with email address - Display the default footer at the bottom of all error pages along with the email address of the website maintainer specified by the ServerAdmin directive. Refer to Section 24.3.1.1, "General Options" for information about configuring the ServerAdmin directive. Show footer - Display just the default footer at the bottom of error pages. No footer - Do not display a footer at the bottom of error pages.
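The settings described above are a graphical front end for a small number of httpd.conf directives. A minimal sketch of what the tool ends up writing for the Directory Page Search List and the 404 example above; the file names in the search list are illustrative, and the exact set of directives written can differ by release:

# Entries from the Directory Page Search List
DirectoryIndex index.html index.htm index.shtml
# File behavior for the 404 - Not Found error code, serving the 404.html page described above
ErrorDocument 404 /error/404.html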
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/httpd_configuration-default_settings
Chapter 7. Enabling simple content access with Red Hat Satellite
Chapter 7. Enabling simple content access with Red Hat Satellite For Satellite, new allocations and manifests have used simple content access by default since the release of Satellite 6.9. New Satellite organizations have used simple content access by default since the release of Satellite 6.13, where the setting on the organization overrides any setting on the manifest. For Red Hat accounts and organizations that primarily use Satellite, versions 6.15 and earlier can continue to support an entitlement-based workflow for the remainder of the supported lifecycle for those versions. However, Satellite version 6.16 and later versions support only the simple content access workflow. For the most recent information about the interactions of simple content access and specific versions of Satellite, see the Transition of Red Hat's subscription services to the Red Hat Hybrid Cloud Console (console.redhat.com) Red Hat Customer Portal article. 7.1. Enabling simple content access on an existing Satellite allocation and manifest For Satellite version 6.16, manual activation of simple content access is no longer necessary. For supported versions of Satellite 6.15 and earlier, see the following articles for the most recent information about simple content access enablement: Simple Content Access Transition of Red Hat's subscription services to the Red Hat Hybrid Cloud Console (console.redhat.com) 7.2. Completing post-enablement steps for Satellite After you enable simple content access, the way that you interact with some subscription management tools, including Satellite, differs. You must make some changes in Satellite to accommodate these different workflows and the individual behaviors within them. 7.2.1. Configuring activation keys and refreshing manifests When you change from the entitlement mode to the simple content access mode, workflows that rely on existing activation keys and manifests are affected. You must create new activation keys that contain only content-related functions and do not contain the subscription-related functions that relied on attaching subscriptions to individual systems through entitlements. If you are using Satellite 6.13 or later, you must add the renewed subscriptions to a manifest and refresh it at the time of subscription renewal. If you are using Satellite 6.12 or earlier, after you enable simple content access on the manifest you must refresh it. Note that Satellite versions 6.13 and earlier are out of support. Note For Satellite version support information, see the Red Hat Satellite Product Life Cycle life cycle and update policy document. For more information about the effects that a change to simple content access has on existing activation keys and manifests, see the following articles: Simple Content Access Transition of Red Hat's subscription services to the Red Hat Hybrid Cloud Console (console.redhat.com) 7.2.2. Updating host groups Use the following steps to update each relevant host group to use the new activation keys. You can also perform these steps from the hammer command line interface. From the Satellite web UI navigation, click Configure > Host Groups . Click the host group that you want to update. Then click the Activation Keys tab. On the Activation Keys page, enter the new activation key for the host group, replacing the old activation keys. Click Reload data to confirm the activation key change for the host group. Click Submit to save the host group changes. 7.2.3. 
Reconfiguring hosts For Red Hat Satellite, the new activation keys that you create for simple content access apply only to newly provisioned hosts. For existing hosts, you must do some reconfiguration and re-enable repositories. When simple content access is enabled, all repositories are disabled by default if a host does not have a subscription attached. This default setting prevents conflicting repositories from being enabled when a host has access to repositories that span multiple operating system versions. To do these changes, you can use the following commands as a snippet in a remote job that runs with the remote execution function of Red Hat Satellite. Comments are included in the following snippet to help you understand the series of tasks. You can also run these commands locally on each host, but using the bulk host management and remote execution capabilities of Red Hat Satellite during a maintenance window is more efficient. 7.2.4. Completing additional post-enablement steps After the migration for your Red Hat account and organization is complete and simple content access is enabled, review the articles in the Additional resources section for more information about using the simple content access mode and configuring and working with the services in the Hybrid Cloud Console. Ensure that you understand how this change to the simple content access mode affects the workflow that your organization uses. If you had any customized processes that relied upon artifacts from the old entitlement-based mode, such as checking for valid subscriptions on a per-system basis, these processes will need to be discarded or redesigned to be compatible with the new simple content access workflow. Find out more about additional services in the Hybrid Cloud Console that can improve your subscription and system management processes and determine if you are taking advantage of them. See the Hybrid Cloud Console at https://console.redhat.com to explore these services. Authorize your Red Hat organization's users to access the services of the Red Hat Hybrid Cloud Console by setting up user groups, assigning roles, and doing other tasks in the role-based user access control (RBAC) system. Authorize your Red Hat organization's users to view system inventory data with appropriate filtering by creating workspaces that classify systems into logical groups. Configure Hybrid Cloud Console notifications so that alerts about specific events in Hybrid Cloud Console services can go to a named group of users or go to applications, APIs, or webhooks for additional custom actions. Activate the subscriptions service, if this service is not already active, to begin account-wide usage reporting of Red Hat products. Explore the capabilities, including subscription and system management capabilities, of the Hybrid Cloud Console and how workflows for some of these capabilities might have changed from the workflows that were previously available in the Red Hat Customer Portal at access.redhat.com: Tracking usage reporting for Red Hat products and variants on the product platforms pages of the subscriptions service. Tracking and managing your system infrastructure in the inventory service. Using activation keys to help with system registration, setting system purpose, and enabling repositories. Creating and exporting manifests for use within your Red Hat Satellite environment to find, access, and download content from the Red Hat Content Delivery Network. 
Determining whether the additional capabilities of Red Hat Insights, including the advisor, vulnerability, remediation, patch, and other services are right for your environment. Additional resources The following articles are actively being updated to address customer questions and concerns during and after the account migration process that began on October 25, 2024. Transition of Red Hat's subscription services to the Red Hat Hybrid Cloud Console (console.redhat.com) Transitioning Red Hat Subscription Management to the Hybrid Cloud Console Simple Content Access
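Section 7.2.2 notes that the host group update can also be performed from the hammer command line interface. The following is a minimal sketch under the assumptions that hammer is already configured for your Satellite, that the host group and activation key names are placeholders, and that your Satellite version stores activation keys in the kt_activation_keys host group parameter; adjust the commands if your version exposes a dedicated option instead:

# Point the host group at the new simple content access activation key
$ hammer hostgroup set-parameter --hostgroup "RHEL8-Base" --name "kt_activation_keys" --value "rhel8-sca-key"
# Confirm the change
$ hammer hostgroup info --name "RHEL8-Base"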
[ "Get a list of all the currently enabled repos REPOS=USD(subscription-manager repos --list-enabled | grep \"Repo ID\" | cut -f 2 -d ':' ) (Optional) dump that list to a file in case of errors echo USDREPOS >> ENABLED_REPOS.txt Construct a command line to pass to 'subscription-manager repos' so that we call it once, instead of once per repo. This would lower the number of API calls and load on the Satellite. CMDLINE=USD(echo USDREPOS | sed 's/ / --enable /g') Disable all the repos & Remove any existing entitlements subscription-manager repos --disable '*' subscription-manager remove --all Call subscription-manager fresh to ensure that we have a content access cert (which is the authorization method when SCA is enabled) subscription-manager refresh Finally (re) enable the correct repos. subscription-manager repos --enable USDCMDLINE" ]
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_simple_content_access/proc-enabling-simplecontent-with-satellite_assembly-simplecontent-ctxt
Chapter 9. Technology Previews
Chapter 9. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.9. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 9.1. Infrastructure services Socket API for TuneD available as a Technology Preview The socket API for controlling TuneD through a UNIX domain socket is now available as a Technology Preview. The socket API maps one-to-one with the D-Bus API and provides an alternative communication method for cases where D-Bus is not available. By using the socket API, you can control the TuneD daemon to optimize the performance, and change the values of various tuning parameters. The socket API is disabled by default, you can enable it in the tuned-main.conf file. Bugzilla:2113900 9.2. Networking AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. Bugzilla:1633143 [1] XDP features that are available as Technology Preview Red Hat provides the usage of the following eXpress Data Path (XDP) features as unsupported Technology Preview: Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit. The XDP hardware offloading. Bugzilla:1889737 Multi-protocol Label Switching for TC available as a Technology Preview The Multi-protocol Label Switching (MPLS) is an in-kernel data-forwarding mechanism to route traffic flow across enterprise networks. In an MPLS network, the router that receives packets decides the further route of the packets based on the labels attached to the packet. With the usage of labels, the MPLS network has the ability to handle packets with particular characteristics. For example, you can add tc filters for managing packets received from specific ports or carrying specific types of traffic, in a consistent way. After packets enter the enterprise network, MPLS routers perform multiple operations on the packets, such as push to add a label, swap to update a label, and pop to remove a label. MPLS allows defining actions locally based on one or multiple labels in RHEL. You can configure routers and set traffic control ( tc ) filters to take appropriate actions on the packets based on the MPLS label stack entry ( lse ) elements, such as label , traffic class , bottom of stack , and time to live . For example, the following command adds a filter to the enp0s1 network interface to match incoming packets having the first label 12323 and the second label 45832 . On matching packets, the following actions are taken: the first MPLS TTL is decremented (packet is dropped if TTL reaches 0) the first MPLS label is changed to 549386 the resulting packet is transmitted over enp0s2 , with destination MAC address 00:00:5E:00:53:01 and source MAC address 00:00:5E:00:53:02 Bugzilla:1814836 [1] , Bugzilla:1856415 act_mpls module available as a Technology Preview The act_mpls module is now available in the kernel-modules-extra rpm as a Technology Preview. The module allows the application of Multiprotocol Label Switching (MPLS) actions with Traffic Control (TC) filters, for example, push and pop MPLS label stack entries with TC filters. 
The module also allows the Label, Traffic Class, Bottom of Stack, and Time to Live fields to be set independently. Bugzilla:1839311 [1] The systemd-resolved service is now available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, a Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. Bugzilla:1906489 9.3. Kernel Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol that implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which maintains two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. Bugzilla:1605216 [1] eBPF available as a Technology Preview Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which enables creating various types of maps, and also allows to load programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf(2) manual page for more information. The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase. All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: AF_XDP , a socket for connecting the eXpress Data Path (XDP) path to user space for applications that prioritize packet processing performance. Bugzilla:1559616 [1] The kexec fast reboot feature is available as a Technology Preview The kexec fast reboot feature continues to be available as a Technology Preview. The kexec fast reboot significantly speeds the boot process as you can boot directly into the second kernel without passing through the Basic Input/Output System (BIOS) or firmware first. To use this feature: Load the kexec kernel manually. Reboot for changes to take effect. Note that the kexec fast reboot capability is available with a limited scope of support on RHEL 9 and later releases. Bugzilla:1769727 The Intel data streaming accelerator driver for kernel is available as a Technology Preview The Intel data streaming accelerator driver (IDXD) for the kernel is currently available as a Technology Preview. It is an Intel CPU integrated accelerator and includes a shared work queue with process address space ID (pasid) submission and shared virtual memory (SVM). Bugzilla:1837187 [1] The accel-config package available as a Technology Preview The accel-config package is now available on Intel EM64T and AMD64 architectures as a Technology Preview. This package helps in controlling and configuring data-streaming accelerator (DSA) sub-system in the Linux Kernel. 
Also, it configures devices through sysfs (pseudo-filesystem), saves and loads the configuration in the json format. Bugzilla:1843266 [1] SGX available as a Technology Preview Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel partially provides the SGX v1 and v1.5 functionality. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. Version 2 adds Enclave Dynamic Memory Management (EDMM). Notable features include: Modifying EPCM permissions of regular enclave pages that belong to an initialized enclave. Dynamic addition of regular enclave pages to an initialized enclave. Expanding an initialized enclave to accommodate more threads. Removing regular and TCS pages from an initialized enclave. Bugzilla:1660337 [1] 9.4. File systems and storage File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8, the file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that provides the capability of DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, a mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. Bugzilla:1627455 [1] OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. 
You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . Bugzilla:1690207 [1] Stratis is now available as a Technology Preview Stratis is a new local storage manager, which provides managed file systems on top of pools of storage with additional features. It is provided as a Technology Preview. With Stratis, you can perform the following storage tasks: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. For more information, see the Setting up Stratis file systems documentation. RHEL 8.5 updated Stratis to version 2.4.2. For more information, see the Stratis 2.4.2 Release Notes . Jira:RHELPLAN-1212 [1] NVMe/TCP host is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme_tcp.ko kernel module has been added as a Technology Preview. The use of NVMe/TCP as a host is manageable with tools provided by the nvme-cli package. The NVMe/TCP host Technology Preview is included only for testing purposes and is not currently planned for full support. Bugzilla:1696451 [1] Setting up a Samba server on an IdM domain member is provided as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . Jira:RHELPLAN-13195 [1] 9.5. High availability and clusters Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on Podman, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat OpenStack. 
Bugzilla:1619620 [1] Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. Bugzilla:1784200 New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now provides the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action, it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation that a ping to a router might detect. Bugzilla:1775847 [1] 9.6. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. The IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use previous or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . Bugzilla:1664719 DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now implement DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated.
Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2 Secure Domain Name System (DNS) Deployment Guide DNSSEC Key Rollover Timing Considerations Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. Bugzilla:1664718 ACME available as a Technology Preview The Automated Certificate Management Environment (ACME) service is now available in Identity Management (IdM) as a Technology Preview. ACME is a protocol for automated identifier validation and certificate issuance. Its goal is to improve security by reducing certificate lifetimes and avoiding manual processes from certificate lifecycle management. In RHEL, the ACME service uses the Red Hat Certificate System (RHCS) PKI ACME responder. The RHCS ACME subsystem is automatically deployed on every certificate authority (CA) server in the IdM deployment, but it does not service requests until the administrator enables it. RHCS uses the acmeIPAServerCert profile when issuing ACME certificates. The validity period of issued certificates is 90 days. Enabling or disabling the ACME service affects the entire IdM deployment. Important It is recommended to enable ACME only in an IdM deployment where all servers are running RHEL 8.4 or later. Earlier RHEL versions do not include the ACME service, which can cause problems in mixed-version deployments. For example, a CA server without ACME can cause client connections to fail, because it uses a different DNS Subject Alternative Name (SAN). Warning Currently, RHCS does not remove expired certificates. Because ACME certificates expire after 90 days, the expired certificates can accumulate and this can affect performance. To enable ACME across the whole IdM deployment, use the ipa-acme-manage enable command: To disable ACME across the whole IdM deployment, use the ipa-acme-manage disable command: To check whether the ACME service is installed and if it is enabled or disabled, use the ipa-acme-manage status command: Bugzilla:1628987 [1] sssd-idp sub-package available as a Technology Preview The sssd-idp sub-package for SSSD contains the oidc_child and krb5 idp plugins, which are client-side components that perform OAuth2 authentication against Identity Management (IdM) servers. This feature is available only with IdM servers on RHEL 8.7 and later. Bugzilla:2065692 SSSD internal krb5 idp plugin available as a Technology Preview The SSSD krb5 idp plugin allows you to authenticate against an external identity provider (IdP) using the OAuth2 protocol. This feature is available only with IdM servers on RHEL 8.7 and later. Bugzilla:2056483 RHEL IdM allows delegating user authentication to external identity providers as a Technology Preview As a Technology Preview in RHEL IdM, you can now associate users with external identity providers (IdP) that support the OAuth 2 device authorization flow. When these users authenticate with the SSSD version available in RHEL 8.7 or later, they receive RHEL IdM single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. 
Notable features include: Adding, modifying, and deleting references to external IdPs with ipa idp-* commands Enabling IdP authentication for users with the ipa user-mod --user-auth-type=idp command For additional information, see Using external identity providers to authenticate to IdM . Bugzilla:2101770 9.7. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is available for the 64-bit ARM architecture as a Technology Preview. You can now connect to the desktop session on a 64-bit ARM server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on 64-bit ARM. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27394 [1] , Bugzilla:1667225, Bugzilla:1724302 , Bugzilla:1667516 GNOME for the IBM Z architecture available as a Technology Preview The GNOME desktop environment is available for the IBM Z architecture as a Technology Preview. You can now connect to the desktop session on an IBM Z server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on IBM Z. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27737 [1] 9.8. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. Bugzilla:1698565 [1] 9.9. Virtualization KVM virtualization is usable in RHEL 8 Hyper-V virtual machines As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel and AMD systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization Bugzilla:1519039 [1] AMD SEV and SEV-ES for KVM virtual machines As a Technology Preview, RHEL 8 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts the VM's memory to protect the VM from access by the host. This increases the security of the VM. In addition, the enhanced Encrypted State version of SEV (SEV-ES) is also provided as Technology Preview. SEV-ES encrypts all CPU register contents when a VM stops running. This prevents the host from modifying the VM's CPU registers or reading any information from them. 
Note that SEV and SEV-ES work only on the 2nd generation of AMD EPYC CPUs (codenamed Rome) or later. Also note that RHEL 8 includes SEV and SEV-ES encryption, but not the SEV and SEV-ES security attestation. Bugzilla:1501618 [1] , Jira:RHELPLAN-7677, Bugzilla:1501607 Intel vGPU available as a Technology Preview As a Technology Preview, it is possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, it is possible to enable a VNC console operated by Intel vGPU. By enabling it, users can connect to a VNC console of the VM and see the VM's desktop hosted by Intel vGPU. However, this currently only works for RHEL guest operating systems. Note that this feature is deprecated and will be removed entirely in a future RHEL major release. Bugzilla:1528684 [1] Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, IBM POWER, and IBM Z systems hosts with RHEL 8. With this feature, a RHEL 7 or RHEL 8 VM that runs on a physical RHEL 8 host can act as a hypervisor, and host its own VMs. Jira:RHELPLAN-14047 [1] , Jira:RHELPLAN-24437 Technology Preview: Select Intel network adapters now provide SR-IOV in RHEL guests on Hyper-V As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters that are supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine The feature is currently provided with Microsoft Windows Server 2016 and later. Bugzilla:1348508 [1] Intel TDX in RHEL guests As a Technology Preview, the Intel Trust Domain Extension (TDX) feature can now be used in RHEL 8.8 and later guest operating systems. If the host system supports TDX, you can deploy hardware-isolated RHEL 9 virtual machines (VMs), called trust domains (TDs). Note, however, that TDX currently does not work with kdump , and enabling TDX will cause kdump to fail on the VM. Bugzilla:1836977 [1] Sharing files between hosts and VMs using virtiofs As a Technology Preview, RHEL 8 now provides the virtio file system ( virtiofs ). Using virtiofs , you can efficiently share files between your host system and its virtual machines (VM). Bugzilla:1741615 [1] 9.10. RHEL in cloud environments RHEL confidential VMs are now available on Azure as a Technology Preview With the updated RHEL kernel, you can now create and run confidential virtual machines (VMs) on Microsoft Azure as a Technology Preview. However, it is not yet possible to encrypt RHEL confidential VM images during boot on Azure. Jira:RHELPLAN-122316 [1] 9.11. Containers SQLite database backend for Podman is available as a Technology Preview Beginning with Podman v4.6, the SQLite database backend for Podman is available as a Technology Preview. 
To set the database backend to SQLite, add the database_backend = "sqlite" option in the /etc/containers/containers.conf configuration file (see the configuration sketch after this section). Run the podman system reset command to reset storage back to the initial state before you switch to the SQLite database backend. Note that you have to recreate all containers and pods. The SQLite database guarantees good stability and consistency. Other databases in the containers stack will be moved to SQLite as well. BoltDB remains the default database backend. Jira:RHELPLAN-154428 [1] The podman-machine command is unsupported The podman-machine command for managing virtual machines is available only as a Technology Preview. Instead, run Podman directly from the command line. Jira:RHELDOCS-16861 [1]
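The following is a minimal illustration of the backend switch described in the SQLite note above, not an excerpt from the release notes themselves; the [engine] table is the customary location for this option, but verify the placement against the containers.conf(5) man page shipped with your Podman version.

# /etc/containers/containers.conf
[engine]
database_backend = "sqlite"

# Reset storage to its initial state before switching; all containers and pods must be recreated afterward.
podman system reset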
[ "tc filter add dev enp0s1 ingress protocol mpls_uc flower mpls lse depth 1 label 12323 lse depth 2 label 45832 action mpls dec_ttl pipe action mpls modify label 549386 pipe action pedit ex munge eth dst set 00:00:5E:00:53:01 pipe action pedit ex munge eth src set 00:00:5E:00:53:02 pipe action mirred egress redirect dev enp0s2", "xfs_info /mount-point | grep ftype", "ipa-acme-manage enable The ipa-acme-manage command was successful", "ipa-acme-manage disable The ipa-acme-manage command was successful", "ipa-acme-manage status ACME is enabled The ipa-acme-manage command was successful" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/technology-previews
Chapter 8. Creating infrastructure machine sets
Chapter 8. Creating infrastructure machine sets Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 8.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. 
Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.2.1. Creating infrastructure machine sets for different clouds Use the sample compute machine set for your cloud. 8.2.1.1. Sample YAML for a compute machine set custom resource on AWS The sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) Local Zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra 3 machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the infrastructure ID, infra role node label, and zone. 3 Specify the infra role node label. 4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 5 Specify the zone name, for example, us-east-1a . 6 Specify the region, for example, us-east-1 . 
7 Specify the infrastructure ID and zone. 8 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 9 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 8.2.1.2. Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when 
you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the infra node label. 3 Specify the infrastructure ID, infra node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 9 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. Additional resources Using the Azure Marketplace offering 8.2.1.3. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: "1" 22 1 5 7 14 16 17 18 21 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 19 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 15 Specify the region to place machines on. 13 Specify the availability set for the cluster. 22 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. Note Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs. 8.2.1.4. 
Sample YAML for a compute machine set custom resource on IBM Cloud This sample YAML defines a compute machine set that runs in a specified IBM Cloud(R) zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The <infra> node label. 4 6 10 The infrastructure ID, <infra> node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud(R) instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 19 The taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.5. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" , where infra is the node label to add. 
Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a Sample GCP MachineSet values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 For <infra> , specify the <infra> node label. 3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 
6 Specifies a single service account. Multiple service accounts are not supported. 7 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 8.2.1.6. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI ( oc ). Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 Specify the <infra> node label. 3 Specify the infrastructure ID, <infra> node label, and zone. 4 Annotations for the cluster autoscaler. 5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . 
Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.17. 6 Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 7 Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 8 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 9 Specify the amount of memory for the cluster in Gi. 10 Specify the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 11 Specify the size of the system disk in Gi. 12 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 13 Specify the number of vCPU sockets. 14 Specify the number of vCPUs per socket. 15 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.7. Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 8.2.1.8. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter data center to deploy the compute machine set on. 14 Specify the vCenter datastore to deploy the compute machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 8.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 8.2.3. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. 
This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 8.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Important Creating a custom machine configuration pool overrides default worker pool configurations if they refer to the same file or unit. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
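Before creating the pool, you can optionally confirm which nodes carry the label that the nodeSelector above matches. This check is not part of the original procedure; it is a minimal sketch that reuses only the node-role.kubernetes.io/infra label shown earlier: USD oc get nodes -l node-role.kubernetes.io/infra= Nodes returned by this selector are the ones that the infra machine config pool manages after you create it.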
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 8.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, with an infra node being assigned as a worker, there is a chance user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 8.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved ... This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule . Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node. Note If a descheduler is used, pods violating node taints could be evicted from the cluster. 
Add the taint with NoExecute Effect along with the above taint with NoSchedule Effect: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that tolerate the taint. The effect will remove any existing pods from the node that do not have a matching toleration. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the value of the key-value pair taint that you added to the node. 4 Specify the effect that you added to the node. 5 Specify the key that you added to the node. 6 Specify the Equal Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 7 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. See Understanding taints and tolerations for more details about different effects of taints. 8.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown: spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label . 8.4.1. Moving the router You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. 
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.30.3 Because the role list includes infra , the pod is running on the correct node. 8.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 8.4.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved 
effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 8.4.4. Moving the Vertical Pod Autoscaler Operator components The Vertical Pod Autoscaler Operator (VPA) consists of three components: the recommender, updater, and admission controller. The Operator and each component has its own pod in the VPA namespace on the control plane nodes. You can move the VPA Operator and component pods to infrastructure nodes by adding a node selector to the VPA subscription and the VerticalPodAutoscalerController CR. The following example shows the default deployment of the VPA pods to the control plane nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none> Procedure Move the VPA Operator pod by adding a node selector to the Subscription custom resource (CR) for the VPA Operator: Edit the CR: USD oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler Add a node selector to match the node role label on the infra node: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" 1 1 Specifies the node role of an infra node. Note If the infra node uses taints, you need to add a toleration to the Subscription CR. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "node-role.kubernetes.io/infra" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for a taint on the infra node. 
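Optional: before moving the individual VPA components, you can check whether the Operator pod was rescheduled after you saved the Subscription change. This is a simple sketch; the pod name prefix vertical-pod-autoscaler-operator matches the example output shown earlier, and the NODE column should now show an infra node: $ oc get pods -n openshift-vertical-pod-autoscaler -o wide | grep vertical-pod-autoscaler-operator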
Move each VPA component by adding node selectors to the VerticalPodAutoscalerController custom resource (CR): Edit the CR: USD oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler Add node selectors to match the node role label on the infra node: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" 3 1 Optional: Specifies the node role for the VPA admission pod. 2 Optional: Specifies the node role for the VPA recommender pod. 3 Optional: Specifies the node role for the VPA updater pod. Note If a target node uses taints, you need to add a toleration to the VerticalPodAutoscalerController CR. For example: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 2 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 3 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for the admission controller pod for a taint on the infra node. 2 Specifies a toleration for the recommender pod for a taint on the infra node. 3 Specifies a toleration for the updater pod for a taint on the infra node. Verification You can verify that the pods have moved by using the following command: USD oc get pods -n openshift-vertical-pod-autoscaler -o wide The pods are no longer deployed to the control plane nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> 8.4.5. Moving the Cluster Resource Override Operator pods By default, the Cluster Resource Override Operator installation process creates an Operator pod and two Cluster Resource Override pods on nodes in the clusterresourceoverride-operator namespace. You can move these pods to other nodes, such as infrastructure nodes, as needed. The following example shows that the Cluster Resource Override pods are deployed to control plane nodes and that the Cluster Resource Override Operator pod is deployed to a worker node.
Example Cluster Resource Override pods NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.128.2.32 ip-10-0-14-183.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.130.2.10 ip-10-0-20-140.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.131.0.33 ip-10-0-2-39.us-west-2.compute.internal <none> <none> Example node list NAME STATUS ROLES AGE VERSION ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.30.4 ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.30.4 Procedure Move the Cluster Resource Override Operator pod by adding a node selector to the Subscription custom resource (CR) for the Cluster Resource Override Operator. Edit the CR: USD oc edit -n clusterresourceoverride-operator subscriptions.operators.coreos.com clusterresourceoverride Add a node selector to match the node role label on the node where you want to install the Cluster Resource Override Operator pod: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" 1 # ... 1 Specify the role of the node where you want to deploy the Cluster Resource Override Operator pod. Note If the infra node uses taints, you need to add a toleration to the Subscription CR. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "node-role.kubernetes.io/infra" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for a taint on the infra node. Move the Cluster Resource Override pods by adding a node selector to the ClusterResourceOverride custom resource (CR): Edit the CR: USD oc edit ClusterResourceOverride cluster -n clusterresourceoverride-operator Add a node selector to match the node role label on the infra node: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster resourceVersion: "37952" spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 deploymentOverrides: replicas: 1 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 # ... 1 Optional: Specify the number of Cluster Resource Override pods to deploy. The default is 2 . Only one pod is allowed per node. 2 Optional: Specify the role of the node where you want to deploy the Cluster Resource Override pods. Note If the infra node uses taints, you need to add a toleration to the ClusterResourceOverride CR. For example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster # ... 
spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 deploymentOverrides: replicas: 3 nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 1 - key: "key" operator: "Equal" value: "value" effect: "NoSchedule" 1 Specifies a toleration for a taint on the infra node. Verification You can verify that the pods have moved by using the following command: USD oc get pods -n clusterresourceoverride-operator -o wide The Cluster Resource Override pods are now deployed to the infra nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.127.2.25 ip-10-0-23-244.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.128.0.80 ip-10-0-24-233.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.129.0.71 ip-10-0-67-453.us-west-2.compute.internal <none> <none> Additional resources Moving monitoring components to different nodes
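As a general cross-check after moving several components, you can list every pod that is scheduled to a particular infra node, regardless of namespace. This is an optional sketch rather than part of any single procedure above; replace <node_name> with the name of one of your infra nodes: $ oc get pods --all-namespaces -o wide --field-selector spec.nodeName=<node_name> Reviewing this list helps confirm that only infrastructure workloads, such as the router, registry, and monitoring pods, are running on the node.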
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra 3 machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: 
<infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 
6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: 
<infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: node-role.kubernetes.io/infra effect: NoSchedule", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: 
node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 
365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7", "spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get 
ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.30.3", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" 
tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>", "oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 3", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: 
resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"", "oc get pods -n openshift-vertical-pod-autoscaler -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.128.2.32 ip-10-0-14-183.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.130.2.10 ip-10-0-20-140.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.131.0.33 ip-10-0-2-39.us-west-2.compute.internal <none> <none>", "NAME STATUS ROLES AGE VERSION ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.30.4 ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.30.4", "oc edit -n clusterresourceoverride-operator subscriptions.operators.coreos.com clusterresourceoverride", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit ClusterResourceOverride cluster -n clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster resourceVersion: \"37952\" spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 deploymentOverrides: replicas: 1 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 
deploymentOverrides: replicas: 3 nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"value\" effect: \"NoSchedule\"", "oc get pods -n clusterresourceoverride-operator -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.127.2.25 ip-10-0-23-244.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.128.0.80 ip-10-0-24-233.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.129.0.71 ip-10-0-67-453.us-west-2.compute.internal <none> <none>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_management/creating-infrastructure-machinesets
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 6-9.2 Mon 3 April 2017 Robert Kratky Version for 6.9 GA release Revision 2-32 Sun Jun 26 2016 Mirek Jahoda Async release with fixes Revision 2-30 Wed May 3 2016 Robert Kratky Version for 6.8 GA release Revision 2-23 Tue Feb 17 2013 Jacquelynn East Version for 6.4 GA release Revision 2-22 Tue May 17 2011 Jacquelynn East BZ#598956 Revision 2-18 Mon May 16 2011 Jacquelynn East BZ#698956 Revision 2-15 Mon Nov 29 2010 Michael Hideo-Smith Initialized Revision 2-14 Thu Nov 18 2010 Jacquelynn East Minor edits Revision 2-13 Thu Nov 18 2010 Jacquelynn East Minor edits
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/appe-publican-revision_history
Chapter 16. Deploying and using the Red Hat build of OptaPlanner vehicle route planning starter application
Chapter 16. Deploying and using the Red Hat build of OptaPlanner vehicle route planning starter application As a developer, you can use the OptaWeb Vehicle Routing starter application to optimize your vehicle fleet deliveries. Prerequisites OpenJDK (JDK) 11 is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. 16.1. What is OptaWeb Vehicle Routing? The main purpose of many businesses is to transport various types of cargo. The goal of these businesses is to deliver a piece of cargo from the loading point to a destination and use its vehicle fleet in the most efficient way. One of the main objectives is to minimize travel costs which are measured in either time or distance. This type of optimization problem is referred to as the vehicle routing problem (VRP) and has many variations. Red Hat build of OptaPlanner can solve many of these vehicle routing variations and provides solution examples. OptaPlanner enables developers to focus on modeling business rules and requirements instead of learning constraint programming theory. OptaWeb Vehicle Routing expands the vehicle routing capabilities of OptaPlanner by providing a starter application that answers questions such as these: Where do I get the distances and travel times? How do I visualize the solution on a map? How do I build an application that runs in the cloud? OptaWeb Vehicle Routing uses OpenStreetMap (OSM) data files. For information about OpenStreetMap, see the OpenStreetMap web site. Use the following definitions when working with OptaWeb Vehicle Routing: Region : An arbitrary area on the map of Earth, represented by an OSM file. A region can be a country, a city, a continent, or a group of countries that are frequently used together. For example, the DACH region includes Germany (DE), Austria (AT), and Switzerland (CH). Country code : A two-letter code assigned to a country by the ISO-3166 standard. You can use a country code to filter geosearch results. Because you can work with a region that spans multiple countries (for example, the DACH region), OptaWeb Vehicle Routing accepts a list of country codes so that geosearch filtering can be used with such regions. For a list of country codes, see ISO 3166 Country Codes Geosearch : A type of query where you provide an address or a place name of a region as the search keyword and receive a number of GPS locations as a result. The number of locations returned depends on how unique the search keyword is. Because most place names are not unique, filter out nonrelevant results by including only places in the country or countries that are in your working region. 16.2. Download and build the OptaWeb Vehicle Routing deployment files You must download and prepare the deployment files before building and deploying OptaWeb Vehicle Routing. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts ( rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip ). Extract the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. 
Download Red Hat Decision Manager 7.13 Maven Repository Kogito and OptaPlanner 8 Maven Repository ( rhpam-7.13.5-kogito-maven-repository.zip ). Extract the rhpam-7.13.5-kogito-maven-repository.zip file. Copy the contents of the rhpam-7.13.5-kogito-maven-repository/maven-repository subdirectory into the ~/.m2/repository directory. Navigate to the optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing directory. Enter the following command to build OptaWeb Vehicle Routing: 16.3. Run OptaWeb Vehicle Routing locally using the runLocally.sh script Linux users can use the runLocally.sh Bash script to run OptaWeb Vehicle Routing. Note The runLocally.sh script does not run on macOS. If you cannot use the runLocally.sh script, see Section 16.4, "Configure and run OptaWeb Vehicle Routing manually" . The runLocally.sh script automates the following setup steps that otherwise must be carried out manually: Create the data directory. Download selected OpenStreetMap (OSM) files from Geofabrik. Try to associate a country code with each downloaded OSM file automatically. Build the project if the standalone JAR file does not exist. Launch OptaWeb Vehicle Routing by taking a single region argument or by selecting the region interactively. See the following sections for instructions about executing the runLocally.sh script: Section 16.3.1, "Run the OptaWeb Vehicle Routing runLocally.sh script in quick start mode" Section 16.3.2, "Run the OptaWeb Vehicle Routing runLocally.sh script in interactive mode" Section 16.3.3, "Run the OptaWeb Vehicle Routing runLocally.sh script in non-interactive mode" 16.3.1. Run the OptaWeb Vehicle Routing runLocally.sh script in quick start mode The easiest way to get started with OptaWeb Vehicle Routing is to run the runLocally.sh script without any arguments. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. Procedure Enter the following command in the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing directory. If prompted to create the .optaweb-vehicle-routing directory, enter y . You are prompted to create this directory the first time you run the script. If prompted to download an OSM file, enter y . The first time that you run the script, OptaWeb Vehicle Routing downloads the Belgium OSM file. The application starts after the OSM file is downloaded. To open the OptaWeb Vehicle Routing user interface, enter the following URL in a web browser: Note The first time that you run the script, it will take a few minutes to start because the OSM file must be imported by GraphHopper and stored as a road network graph. The time you run the runlocally.sh script, load times will be significantly faster. steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.3.2. Run the OptaWeb Vehicle Routing runLocally.sh script in interactive mode Use interactive mode to see the list of downloaded OSM files and country codes assigned to each region. You can use the interactive mode to download additional OSM files from Geofabrik without visiting the website and choosing a destination for the download. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. 
Procedure Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Enter the following command to run the script in interactive mode: At the Your choice prompt, enter d to display the download menu. A list of previously downloaded regions appears followed by a list of regions that you can download. Optional: Select a region from the list of previously downloaded regions: Enter the number associated with a region in the list of downloaded regions. Press the Enter key. Optional: Download a region: Enter the number associated with the region that you want to download. For example, to select the map of Europe, enter 5 . To download the map, enter d then press the Enter key. To download a specific region within the map, enter e then enter the number associated with the region that you want to download, and press the Enter key. Using large OSM files For the best user experience, use smaller regions such as individual European or US states. Using OSM files larger than 1 GB will require significant RAM size and take a lot of time (up to several hours) for the initial processing. The application starts after the OSM file is downloaded. To open the OptaWeb Vehicle Routing user interface, enter the following URL in a web browser: steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.3.3. Run the OptaWeb Vehicle Routing runLocally.sh script in non-interactive mode Use OptaWeb Vehicle Routing in non-interactive mode to start OptaWeb Vehicle Routing with a single command that includes an OSM file that you downloaded previously. This is useful when you want to switch between regions quickly or when doing a demo. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . The OSM file for the region that you want to use has been downloaded. For information about downloading OSM files, see Section 16.3.2, "Run the OptaWeb Vehicle Routing runLocally.sh script in interactive mode" . Internet access is available. Procedure Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Execute the following command where <OSM_FILE_NAME> is an OSM file that you downloaded previously: steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.3.4. Update the data directory You can update the data directory that OptaWeb Vehicle Routing uses if you want to use a different data directory. The default data directory is USDHOME/.optaweb-vehicle-routing . Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Procedure To use a different data directory, add the directory's absolute path to the .DATA_DIR_LAST file in the current data directory. To change country codes associated with a region, edit the corresponding file in the country_codes directory, in the current data directory. For example, if you downloaded an OSM file for Scotland and the script fails to guess the country code, set the content of country_codes/scotland-latest to GB. To remove a region, delete the corresponding OSM file from openstreetmap directory in the data directory and delete the region's directory in the graphhopper directory. 16.4. Configure and run OptaWeb Vehicle Routing manually The easiest way to run OptaWeb Vehicle Routing is to use the runlocally.sh script. 
However, if Bash is not available on your system you can manually complete the steps that the runlocally.sh script performs. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. Procedure Download routing data. The routing engine requires geographical data to calculate the time it takes vehicles to travel between locations. You must download and store OpenStreetMap (OSM) data files on the local file system before you run OptaWeb Vehicle Routing. Note The OSM data files are typically between 100 MB to 1 GB and take time to download so it is a good idea to download the files before building or starting the OptaWeb Vehicle Routing application. Open http://download.geofabrik.de/ in a web browser. Click a region in the Sub Region list, for example Europe . The subregion page opens. In the Sub Regions table, download the OSM file ( .osm.pbf ) for a country, for example Belgium. Create the data directory structure. OptaWeb Vehicle Routing reads and writes several types of data on the file system. It reads OSM (OpenStreetMap) files from the openstreetmap directory, writes a road network graph to the graphhopper directory, and persists user data in a directory called db . Create a new directory dedicated to storing all of these data to make it easier to upgrade to a newer version of OptaWeb Vehicle Routing in the future and continue working with the data you created previously. Create the USDHOME/.optaweb-vehicle-routing directory. Create the openstreetmap directory in the USDHOME/.optaweb-vehicle-routing directory: Move all of your downloaded OSM files (files with the extension .osm.pbf ) to the openstreetmap directory. The rest of the directory structure is created by the OptaWeb Vehicle Routing application when it runs for the first time. After that, your directory structure is similar to the following example: Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-standalone/target . To run OptaWeb Vehicle Routing, enter the following command: In this command, replace the following variables: <OSM_FILE_NAME> : The OSM file for the region that you want to use and that you downloaded previously <COUNTRY_CODE_LIST> : A comma-separated list of country codes used to filter geosearch queries. For a list of country codes, see ISO 3166 Country Codes . The application starts after the OSM file is downloaded. In the following example, OptaWeb Vehicle Routing downloads the OSM map of Central America ( central-america-latest.osm.pbf ) and searches in the countries Belize (BZ) and Guatemala (GT). To open the OptaWeb Vehicle Routing user interface, enter the following URL in a web browser: steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.5. Run OptaWeb Vehicle Routing on Red Hat OpenShift Container Platform Linux users can use the runOnOpenShift.sh Bash script to install OptaWeb Vehicle Routing on Red Hat OpenShift Container Platform. Note The runOnOpenShift.sh script does not run on macOS. Prerequisites You have access to an OpenShift cluster and the OpenShift command-line interface ( oc ) has been installed. For information about Red Hat OpenShift Container Platform, see Installing OpenShift Container Platform . 
OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. Procedure Log in to or start a Red Hat OpenShift Container Platform cluster. Enter the following command where <PROJECT_NAME> is the name of your new project: If necessary, change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Enter the following command to execute the runOnOpenShift.sh script and download an OpenStreetMap (OSM) file: In this command, replace the following variables: <OSM_FILE_NAME> : The name of a file downloaded from <OSM_FILE_DOWNLOAD_URL> . <COUNTRY_CODE_LIST> : A comma-separated list of country codes used to filter geosearch queries. For a list of country codes, see ISO 3166 Country Codes . <OSM_FILE_DOWNLOAD_URL> : The URL of an OSM data file in PBF format accessible from OpenShift. The file will be downloaded during backend startup and saved as /deployments/local/<OSM_FILE_NAME> . In the following example, OptaWeb Vehicle Routing downloads the OSM map of Central America ( central-america-latest.osm.pbf ) and searches in the countries Belize (BZ) and Guatemala (GT). Note For help with the runOnOpenShift.sh script, enter ./runOnOpenShift.sh --help . 16.5.1. Updating the deployed OptaWeb Vehicle Routing application with local changes After you deploy your OptaWeb Vehicle Routing application on Red Hat OpenShift Container Platform, you can update the back end and front end. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven and deployed on OpenShift. Procedure To update the back end, perform the following steps: Change the source code and build the back-end module with Maven. Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Enter the following command to start the OpenShift build: oc start-build backend --from-dir=. --follow To update the front end, perform the following steps: Change the source code and build the front-end module with the npm utility. Change directory to sources/optaweb-vehicle-routing-frontend . Enter the following command to start the OpenShift build: oc start-build frontend --from-dir=docker --follow steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.6. Using OptaWeb Vehicle Routing In the OptaWeb Vehicle Routing application, you can mark a number of locations on the map. The first location is assumed to be the depot. Vehicles must deliver goods from this depot to every other location that you marked. You can set the number of vehicles and the carrying capacity of every vehicle. However, the route is not guaranteed to use all vehicles. The application uses as many vehicles as required for an optimal route. The current version has certain limitations: Every delivery to a location is supposed to take one point of vehicle capacity. For example, a vehicle with a capacity of 10 can visit up to 10 locations before returning to the depot. Setting custom names of vehicles and locations is not supported. 16.6.1. Creating a route To create an optimal route, use the Demo tab of the OptaWeb Vehicle Routing user interface. Prerequisites OptaWeb Vehicle Routing is running and you have access to the user interface. Procedure In OptaWeb Vehicle Routing, click Demo to open the Demo tab. Use the blue minus and plus buttons above the map to set the number of vehicles. 
Each vehicle has a default capacity of 10. Use the plus button in a square on the map to zoom in as required. Note Do not double-click to zoom in. A double click also creates a location. Click a location for the depot. Click other locations on the map for delivery points. If you want to delete a location: Hover the mouse cursor over the location to see the location name. Find the location name in the list in the left part of the screen. Click the X icon next to the name. Every time you add or remove a location or change the number of vehicles, the application creates and displays a new optimal route. If the solution uses several vehicles, the application shows the route for every vehicle in a different color. 16.6.2. Viewing and setting other details You can use other tabs in the OptaWeb Vehicle Routing user interface to view and set additional details. Prerequisites OptaWeb Vehicle Routing is running and you have access to the user interface. Procedure Click the Vehicles tab to view, add, and remove vehicles, and also set the capacity for every vehicle. Click the Visits tab to view and remove locations. Click the Route tab to select each vehicle and view the route for the selected vehicle. 16.6.3. Creating custom data sets with OptaWeb Vehicle Routing There is a built-in demo data set consisting of several large Belgian cities. If you want to have more demos available in the Load demo menu, you can prepare your own data sets. Procedure In OptaWeb Vehicle Routing, add a depot and one or more visits by clicking on the map or using geosearch. Click Export and save the file in the data set directory. Note The data set directory is the directory specified in the app.demo.data-set-dir property. If the application is running through the runLocally.sh script, the data set directory is set to USDHOME/.optaweb-vehicle-routing/dataset . Otherwise, the property is taken from the application.properties file and defaults to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-standalone/target/local/dataset . You can edit the app.demo.data-set-dir property to specify a different data set directory. Edit the YAML file and choose a unique name for the data set. Restart the back end. After you restart the back end, files in the data set directory appear in the Load demo menu. 16.6.4. Troubleshooting OptaWeb Vehicle Routing If OptaWeb Vehicle Routing behaves unexpectedly, follow this procedure to troubleshoot it. Prerequisites OptaWeb Vehicle Routing is running and behaving unexpectedly. Procedure To identify issues, review the back-end terminal output log. To resolve issues, remove the back-end database: Stop the back end by pressing Ctrl+C in the back-end terminal window. Remove the optaweb-vehicle-routing/optaweb-vehicle-routing-backend/local/db directory. Restart OptaWeb Vehicle Routing. 16.7. OptaWeb Vehicle Routing development guide This section describes how to configure and run the back-end and front-end modules in development mode. 16.7.1. OptaWeb Vehicle Routing project structure The OptaWeb Vehicle Routing project is a multi-module Maven project. Figure 16.1. Module dependency tree diagram The back-end and front-end modules are at the bottom of the module tree. These modules contain the application source code. The standalone module is an assembly module that combines the back end and front end into a single executable JAR file. The distribution module represents the final assembly step.
It takes the standalone application and the documentation and wraps them in an archive that is easy to distribute. The back end and front end are separate projects that you can build and deploy separately. In fact, they are written in completely different languages and built with different tools. Both projects have tools that provide a modern developer experience with fast turn-around between code changes and the running application. The following sections describe how to run both the back-end and front-end projects in development mode. 16.7.2. The OptaWeb Vehicle Routing back-end module The back-end module contains a server-side application that uses Red Hat build of OptaPlanner to optimize vehicle routes. Optimization is a CPU-intensive computation that must avoid any I/O operations in order to perform to its full potential. Because one of the chief objectives is to minimize travel cost, either time or distance, OptaWeb Vehicle Routing keeps the travel cost information in RAM. While solving, OptaPlanner needs to know the travel cost between every pair of locations entered by the user. This information is stored in a structure called the distance matrix . When you enter a new location, OptaWeb Vehicle Routing calculates the travel cost between the new location and every other location that has been entered so far, and stores the travel cost in the distance matrix. The travel cost calculation is performed by the GraphHopper routing engine. The back-end module implements the following additional functionality: Persistence WebSocket connection for the front end Data set loading, export, and import To learn more about the back-end code architecture, see Section 16.8, "OptaWeb Vehicle Routing back-end architecture" . The following sections describe how to configure and run the back end in development mode. 16.7.2.1. Running the OptaWeb Vehicle Routing back-end module You can run the back-end module in Quarkus development mode. Prerequisites OptaWeb Vehicle Routing has been configured as described in Section 16.4, "Configure and run OptaWeb Vehicle Routing manually" . Procedure Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-backend . To run the back end in development mode, enter the following command: mvn compile quarkus:dev 16.7.2.2. Running the OptaWeb Vehicle Routing back-end module from IntelliJ IDEA Ultimate You can use IntelliJ IDEA Ultimate to run the OptaWeb Vehicle Routing back-end module to make it easier to develop your project. IntelliJ IDEA Ultimate includes a Quarkus plug-in that automatically creates run configurations for modules that use the Quarkus framework. Procedure Use the optaweb-vehicle-routing-backend run configuration to run the back end. Additional resources For more information, see Run the Quarkus application . 16.7.2.3. Quarkus development mode In development mode, if there are changes to the back-end source code or configuration and you refresh the browser tab where the front end runs, the back end automatically restarts. Learn more about Quarkus development mode . 16.7.2.4. Changing OptaWeb Vehicle Routing back-end module system property values You can temporarily or permanently override the default system property values of the OptaWeb Vehicle Routing back-end module. The OptaWeb Vehicle Routing back-end module system properties are stored in the /src/main/resources/application.properties file. This file is under version control.
Use it to permanently store default configuration property values and to define Quarkus profiles. Prerequisites The OptaWeb Vehicle Routing starter application has been downloaded and extracted. For information, see Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Procedure To temporarily override a default system property value, include the -D<PROPERTY>=<VALUE> argument when you run the mvn or java command, where <PROPERTY> is the name of the property that you want to change and <VALUE> is the value that you want to temporarily assign to that property. The following example shows how to temporarily change the value of the quarkus.http.port system property to 8181 when you use Maven to compile a Quarkus project in dev mode: This temporarily changes the value of the property stored in the /src/main/resources/application.properties file. To change a configuration value permanently, for example to store a configuration that is specific to your development environment, copy the contents of the env-example file to the optaweb-vehicle-routing-backend/.env file. This file is excluded from version control and therefore it does not exist when you clone the repository. You can make changes in the .env file without affecting the Git working tree. Additional resources For a complete list of OptaWeb Vehicle Routing configuration properties, see Section 16.9, "OptaWeb Vehicle Routing back-end configuration properties" . 16.7.2.5. OptaWeb Vehicle Routing backend logging OptaWeb Vehicle Routing uses the SLF4J API and Logback as the logging framework. For more information, see Quarkus - Configuring Logging . 16.7.3. Working with the OptaWeb Vehicle Routing front-end module The front-end project was bootstrapped with Create React App . Create React App provides a number of scripts and dependencies that help with development and with building the application for production. Prerequisites The OptaWeb Vehicle Routing starter application has been downloaded and extracted. For information, see Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Procedure On Fedora, enter the following command to set up the development environment: sudo dnf install npm See Downloading and installing Node.js and npm for more information about installing npm. Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-frontend . Install npm dependencies: npm install Unlike Maven, the npm package manager installs dependencies in node_modules under the project directory and does that only when you execute npm install . Whenever the dependencies listed in package.json change, for example when you pull changes to the master branch, you must execute npm install before you run the development server. Enter the following command to run the development server: npm start If it does not open automatically, open http://localhost:3000/ in a web browser. By default, the npm start command attempts to open this URL in your default browser. Note If you do not want the npm start command to open a new browser tab each time you run it, export the BROWSER=none environment variable. You can use .env.local file to make this preference permanent. To do that, enter the following command: echo BROWSER=none >> .env.local The browser refreshes the page whenever you make changes in the front-end source code. 
The development server process running in the terminal picks up the changes as well and prints compilation and lint errors to the console. Enter the following command to run tests: Change the value of the REACT_APP_BACKEND_URL environment variable to specify the location of the back-end project to be used by npm when you execute npm start or npm run build , for example: Note Environment variables are hard coded inside the JavaScript bundle during the npm build process, so you must specify the back-end location before you build and deploy the front end. To learn more about the React environment variables, see Adding Custom Environment Variables . To build the front end, enter one of the following commands: 16.8. OptaWeb Vehicle Routing back-end architecture The domain model and use cases are essential for the application. The OptaWeb Vehicle Routing domain model is at the center of the architecture and is surrounded by the application layer that embeds use cases. Functions such as route optimization, distance calculation, persistence, and network communication are considered implementation details and are placed at the outermost layer of the architecture. Figure 16.2. Diagram of application layers 16.8.1. Code organization The back-end code is organized in three layers, illustrated in the preceding graphic. The service package contains the application layer that implements use cases. The plugin package contains the infrastructure layer. Code in each layer is further organized by function. This means that each service or plug-in has its own package. 16.8.2. Dependency rules Compile-time dependencies are only allowed to point from outer layers towards the center. Following this rule helps to keep the domain model independent of underlying frameworks and other implementation details and to model the behavior of business entities more precisely. With presentation and persistence being pushed out to the periphery, it is easier to test the behavior of business entities and use cases. The domain has no dependencies. Services only depend on the domain. If a service needs to send a result (for example to the database or to the client), it uses an output boundary interface. Its implementation is injected by the contexts and dependency injection (CDI) container. Plug-ins depend on services in two ways. First, they invoke services based on events such as user input or a route update coming from the optimization engine. Services are injected into plug-ins, which moves the burden of their construction and dependency resolution to the IoC container. Second, plug-ins implement service output boundary interfaces to handle use case results, for example persisting changes to the database or sending a response to the web UI. 16.8.3. The domain package The domain package contains business objects that model the domain of this project, for example Location , Vehicle , Route . These objects are strictly business-oriented and must not be influenced by any tools and frameworks, for example object-relational mapping tools and web service frameworks. 16.8.4. The service package The service package contains classes that implement use cases . A use case describes something that you want to do, for example adding a new location, changing vehicle capacity, or finding coordinates for an address. The business rules that govern use cases are expressed using the domain objects. Services often need to interact with plug-ins in the outer layer, such as persistence, web, and optimization.
To satisfy the dependency rules between layers, the interaction between services and plug-ins is expressed in terms of interfaces that define the dependencies of a service. A plug-in can satisfy a dependency of a service by providing a bean that implements the boundary interface of the service. The CDI container creates an instance of the plug-in bean and injects it to the service at runtime. This is an example of the inversion of control principle. 16.8.5. The plugin package The plugin package contains infrastructure functions such as optimization, persistence, routing, and network. 16.9. OptaWeb Vehicle Routing back-end configuration properties You can set the OptaWeb Vehicle Routing application properties listed in the following table. Property Type Example Description app.demo.data-set-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/dataset Custom data sets are loaded from this directory. Defaults to local/dataset . app.persistence.h2-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/db The directory used by H2 to store the database file. Defaults to local/db . app.region.country-codes List of ISO 3166-1 alpha-2 country codes US , GB,IE , DE,AT,CH , may be empty Restricts geosearch results. app.routing.engine Enumeration air , graphhopper Routing engine implementation. Defaults to graphhopper . app.routing.gh-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/graphhopper The directory used by GraphHopper to store road network graphs. Defaults to local/graphhopper . app.routing.osm-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/openstreetmap The directory that contains OSM files. Defaults to local/openstreetmap . app.routing.osm-file File name belgium-latest.osm.pbf Name of the OSM file to be loaded by GraphHopper. The file must be placed under app.routing.osm-dir . optaplanner.solver.termination.spent-limit java.time.Duration 1m 150s P2dT21h ( PnDTnHnMn.nS ) How long the solver should run after a location change occurs. server.address IP address or hostname 10.0.0.123 , my-vrp.geo-1.openshiftapps.com Network address to which to bind the server. server.port Port number 4000 , 8081 Server HTTP port.
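The following sketch shows one way to combine several of these properties when starting the standalone application. The JAR path matches the standalone build described earlier in this chapter; the property values themselves are only illustrative assumptions:
# Illustrative only: start the standalone JAR with selected properties overridden.
# The routing engine, country codes, solver limit, and port values are example choices.
java \
  -Dapp.routing.engine=air \
  -Dapp.region.country-codes=DE,AT,CH \
  -Doptaplanner.solver.termination.spent-limit=30s \
  -Dserver.port=8081 \
  -jar quarkus-app/quarkus-run.jar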
[ "mvn clean package -DskipTests", "./runLocally.sh", "http://localhost:8080", "./runLocally.sh -i", "http://localhost:8080", "./runLocally.sh <OSM_FILE_NAME>", "USDHOME/.optaweb-vehicle-routing └── openstreetmap", "USDHOME/.optaweb-vehicle-routing ├── db │ └── vrp.mv.db ├── graphhopper │ └── belgium-latest └── openstreetmap └── belgium-latest.osm.pbf", "java -Dapp.demo.data-set-dir=USDHOME/.optaweb-vehicle-routing/dataset -Dapp.persistence.h2-dir=USDHOME/.optaweb-vehicle-routing/db -Dapp.routing.gh-dir=USDHOME/.optaweb-vehicle-routing/graphhopper -Dapp.routing.osm-dir=USDHOME/.optaweb-vehicle-routing/openstreetmap -Dapp.routing.osm-file=<OSM_FILE_NAME> -Dapp.region.country-codes=<COUNTRY_CODE_LIST> -jar quarkus-app/quarkus-run.jar", "java -Dapp.demo.data-set-dir=USDHOME/.optaweb-vehicle-routing/dataset -Dapp.persistence.h2-dir=USDHOME/.optaweb-vehicle-routing/db -Dapp.routing.gh-dir=USDHOME/.optaweb-vehicle-routing/graphhopper -Dapp.routing.osm-dir=USDHOME/.optaweb-vehicle-routing/openstreetmap -Dapp.routing.osm-file=entral-america-latest.osm.pbf -Dapp.region.country-codes=BZ,GT -jar quarkus-app/quarkus-run.jar", "http://localhost:8080", "new-project <PROJECT_NAME>", "./runOnOpenShift.sh <OSM_FILE_NAME> <COUNTRY_CODE_LIST> <OSM_FILE_DOWNLOAD_URL>", "./runOnOpenShift.sh central-america-latest.osm.pbf BZ,GT http://download.geofabrik.de/europe/central-america-latest.osm.pbf", "start-build backend --from-dir=. --follow", "start-build frontend --from-dir=docker --follow", "mvn compile quarkus:dev", "mvn compile quarkus:dev -Dquarkus.http.port=8181", "sudo dnf install npm", "npm install", "npm start", "echo BROWSER=none >> .env.local", "npm test", "REACT_APP_BACKEND_URL=http://10.0.0.123:8081", "./mvnw install", "mvn install", "org.optaweb.vehiclerouting ├── domain ├── plugin # Infrastructure layer │ ├── persistence │ ├── planner │ ├── routing │ └── rest └── service # Application layer ├── demo ├── distance ├── error ├── location ├── region ├── reload ├── route └── vehicle" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/assembly-business-optimizer-vrp
Chapter 4. Using manual approval in OpenShift Pipelines
Chapter 4. Using manual approval in OpenShift Pipelines You can specify a manual approval task in a pipeline. When the pipeline reaches this task, it pauses and awaits approval from one or several OpenShift Container Platform users. If any of the users chooses to reject the task instead of approving it, the pipeline fails. The manual approval gate controller provides this functionality. Important The manual approval gate is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.1. Enabling the manual approval gate controller To use manual approval tasks, you must first enable the manual approval gate controller. Prerequisites You installed the Red Hat OpenShift Pipelines Operator in your cluster. You are logged on to the cluster using the oc command-line utility. You have administrator permissions for the openshift-pipelines namespace. Procedure Create a file named manual-approval-gate-cr.yaml with the following manifest for the ManualApprovalGate custom resource (CR): apiVersion: operator.tekton.dev/v1alpha1 kind: ManualApprovalGate metadata: name: manual-approval-gate spec: targetNamespace: openshift-pipelines Apply the ManualApprovalGate CR by entering the following command: USD oc apply -f manual-approval-gate-cr.yaml Verify that the manual approval gate controller is running by entering the following command: USD oc get manualapprovalgates.operator.tekton.dev Example output NAME VERSION READY REASON manual-approval-gate v0.1.0 True Ensure that the READY status is True . If it is not True , wait for a few minutes and enter the command again. The controller might take some time to reach a ready state. 4.2. Specifying a manual approval task You can specify a manual approval task in your pipeline. When the execution of a pipeline run reaches this task, the pipeline run stops and awaits approval from one or several users. Prerequisites You enabled the manual approval gate controller. You created a YAML specification of a pipeline. Procedure Specify an ApprovalTask in the pipeline, as shown in the following example: apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: example-manual-approval-pipeline spec: tasks: # ... - name: example-manual-approval-task taskRef: apiVersion: openshift-pipelines.org/v1alpha1 kind: ApprovalTask params: - name: approvers value: - user1 - user2 - user3 - name: description value: Example manual approval task - please approve or reject - name: numberOfApprovalsRequired value: '2' - name: timeout value: '60m' # ... Table 4.1. Parameters for a manual approval task Parameter Type Description approvers array The OpenShift Container Platform users who can approve the task. description string Optional: The description of the approval task. OpenShift Pipelines displays the description to the user who can approve or reject the task. numberOfApprovalsRequired string The number of approvals from different users that the task requires. timeout string Optional: The timeout period for approval. If the task does not receive the configured number of approvals during this period, the pipeline run fails.
The default timeout is 1 hour. 4.3. Approving a manual approval task When you run a pipeline that includes an approval task and the execution reaches the approval task, the pipeline run pauses and waits for user approval or rejection. Users can approve or reject the task by using either the web console or the opc command line utility. If any one of the approvers configured in the task rejects the task, the pipeline run fails. If one user approves the task but the configured number of approvals is still not reached, the same user can change to rejecting the task and the pipeline run fails 4.3.1. Approving a manual approval task by using the web console You can approve or reject a manual approval task by using the OpenShift Container Platform web console. If you are listed as an approver in a manual approval task and a pipeline run reaches this task, the web console displays a notification. You can view a list of tasks that require your approval and approve or reject these tasks. Prerequisites You enabled the OpenShift Pipelines console plugin. Procedure View a list of tasks that you can approve by completing one of the following actions: When a notification about a task requiring your approval displays, click Go to Approvals tab in this notification. In the Administrator perspective menu, select Pipelines Pipelines and then click the Approvals tab. In the Developer perspective menu, select Pipelines and then click the Approvals tab. In the PipelineRun details window, in the Details tab, click the rectangle that represents the manual approval task. The list displays only the approval for this task. In the PipelineRun details window, click the ApprovalTasks tab. The list displays only the approval for this pipeline run. In the list of approval tasks, in the line that represents the task that you want to approve, click the icon and then select one of the following options: To approve the task, select Approve . To reject the task, select Reject . Enter a message in the Reason field. Click Submit . Additional resources Enabling the OpenShift Pipelines console plugin 4.3.2. Approving a manual approval task by using the command line You can approve or reject a manual approval task by using the opc command-line utility. You can view a list of tasks for which you are an approver and approve or reject the tasks that are pending approval. Prerequisites You downloaded and installed the opc command-line utility. This utility is available in the same package as the tkn command-line utility. You are logged on to the cluster using the oc command-line utility. 
Procedure View a list of manual approval tasks for which you are listed as an approver by entering the following command: USD opc approvaltask list Example output NAME NumberOfApprovalsRequired PendingApprovals Rejected STATUS manual-approval-pipeline-01w6e1-task-2 2 0 0 Approved manual-approval-pipeline-6ywv82-task-2 2 2 0 Rejected manual-approval-pipeline-90gyki-task-2 2 2 0 Pending manual-approval-pipeline-jyrkb3-task-2 2 1 1 Rejected Optional: To view information about a manual approval task, including its name, namespace, pipeline run name, list of approvers, and current status, enter the following command: USD opc approvaltask describe <approval_task_name> Approve or reject a manual approval task as necessary: To approve a manual approval task, enter the following command: USD opc approvaltask approve <approval_task_name> Optionally, you can specify a message for the approval by using the -m parameter: USD opc approvaltask approve <approval_task_name> -m <message> To reject a manual approval task, enter the following command: USD opc approvaltask reject <approval_task_name> Optionally, you can specify a message for the rejection by using the -m parameter: USD opc approvaltask reject <approval_task_name> -m <message> Additional resources Installing tkn
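As a rough end-to-end sketch of the flow described above, the following commands start the example pipeline from Section 4.2 and then approve its manual approval task. The pipeline name comes from the earlier example; <approval_task_name> is a placeholder that you replace with a name reported by the list command, and the tkn command assumes the pipeline has already been created in your current namespace:
# Start the example pipeline (assumes it was created from the earlier Pipeline manifest).
tkn pipeline start example-manual-approval-pipeline
# Find the pending approval task and approve it with an optional message.
opc approvaltask list
opc approvaltask approve <approval_task_name> -m 'Reviewed and approved'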
[ "apiVersion: operator.tekton.dev/v1alpha1 kind: ManualApprovalGate metadata: name: manual-approval-gate spec: targetNamespace: openshift-pipelines", "oc apply -f manual-approval-gate-cr.yaml", "oc get manualapprovalgates.operator.tekton.dev", "NAME VERSION READY REASON manual-approval-gate v0.1.0 True", "apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: example-manual-approval-pipeline spec: tasks: - name: example-manual-approval-task taskRef: apiVersion: openshift-pipelines.org/v1alpha1 kind: ApprovalTask params: - name: approvers value: - user1 - user2 - user3 - name: description value: Example manual approval task - please approve or reject - name: numberOfApprovalsRequired value: '2' - name: timeout value: '60m'", "opc approvaltask list", "NAME NumberOfApprovalsRequired PendingApprovals Rejected STATUS manual-approval-pipeline-01w6e1-task-2 2 0 0 Approved manual-approval-pipeline-6ywv82-task-2 2 2 0 Rejected manual-approval-pipeline-90gyki-task-2 2 2 0 Pending manual-approval-pipeline-jyrkb3-task-2 2 1 1 Rejected", "opc approvaltask describe <approval_task_name>", "opc approvaltask approve <approval_task_name>", "opc approvaltask approve <approval_task_name> -m <message>", "opc approvaltask reject <approval_task_name>", "opc approvaltask reject <approval_task_name> -m <message>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/creating_cicd_pipelines/using-manual-approval
Chapter 12. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
Chapter 12. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.15, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 12.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 12.3. 
Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). 
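If you want to verify that a candidate instance type meets these minimums before you install, one possible check (shown as a sketch; the instance type is an arbitrary example) uses the AWS CLI:
# Show vCPU and memory for an example instance type.
# Reminder: with SMT enabled, vCPUs = (threads per core x cores) x sockets.
aws ec2 describe-instance-types \
  --instance-types m6i.xlarge \
  --query 'InstanceTypes[].{Type:InstanceType,vCPUs:VCpuInfo.DefaultVCpus,MemoryMiB:MemoryInfo.SizeInMiB}' \
  --output table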
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 12.3.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 12.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 12.3.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 12.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 12.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 12.4. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 12.4.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. 
Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. 
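For illustration, the following sketch creates the external API alias record with the AWS CLI. The hosted zone ID, cluster name, domain, and load balancer values are placeholders; a corresponding record for api-int.<cluster_name>.<domain> would point to the internal load balancer in the private hosted zone:
# Sketch only: create or update the api.<cluster_name>.<domain> alias record.
aws route53 change-resource-record-sets \
  --hosted-zone-id <public_hosted_zone_id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.<cluster_name>.<domain>",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<load_balancer_hosted_zone_id>",
          "DNSName": "<external_load_balancer_dns_name>",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'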
The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. 
Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 12.4.2. 
Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 12.4.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 12.3. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 12.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 12.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 12.6. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagInstanceProfile iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 12.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 12.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 12.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 12.10. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 12.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 12.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 12.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 12.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 12.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 12.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole 12.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. If you use the CloudFormation template to deploy your compute nodes, you must update the worker0.type.properties.ImageID parameter with the AMI ID value. 12.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. 
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both the installation program and the files that it creates are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 12.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
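For example, a minimal sketch of creating an ECDSA key in this case; the key size and file path shown here are illustrative:
USD ssh-keygen -t ecdsa -b 521 -N '' -f <path>/<file_name>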
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 12.8. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 12.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. 
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 12.8.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. 
You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 12.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. 
You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.8.4. 
Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. 
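If you prefer a quick command-line check, the following sketch searches the file for the parameter; you can also simply review the file in an editor:
USD grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml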
Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 12.9. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 12.10. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 
5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 12.10.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 12.17. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.11. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for host names and other identifying names. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.
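Optionally, before launching the stack, you can check the saved template for syntax errors. The following sketch uses the AWS CLI, where <template> is the file name that you chose in the previous step:
USD aws cloudformation validate-template --template-body file://<template>.yaml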
Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 12.11.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 12.18. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 12.12. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
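The parameter values that the following procedure requires come from the output of the VPC CloudFormation stack and from the Ignition config metadata. You can optionally read them with the AWS CLI and jq before you build the JSON file. In the following sketch, the stack name cluster-vpc and the path to metadata.json are assumptions that you must adapt to your environment:
$ jq -r .infraID <installation_directory>/metadata.json
$ aws cloudformation describe-stacks --stack-name cluster-vpc --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text
The first command prints the infrastructure name in the <cluster-name>-<random-string> format, and the second prints a single stack output value, in this example VpcId. Omit the --query option to list all of the stack outputs.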
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 12.12.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 12.19. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
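  # Note: the worker role that follows is intentionally minimal compared to the
  # master role above. Compute nodes only need to describe EC2 instances and
  # regions, while control plane nodes also manage volumes, security groups,
  # target groups, and load balancers on behalf of the cluster cloud provider.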
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.13. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 12.14. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 12.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0493ec0f0a451f83b ap-east-1 ami-050a6d164705e7f62 ap-northeast-1 ami-00910c337e0f52cff ap-northeast-2 ami-07e98d33de2b93ac0 ap-northeast-3 ami-09bc0a599f4b3c483 ap-south-1 ami-0ba603a7f9d41228e ap-south-2 ami-03130aecb5d7459cc ap-southeast-1 ami-026c056e0a25e5a04 ap-southeast-2 ami-0d471f504ff6d9a0f ap-southeast-3 ami-0c1b9a0721cbb3291 ap-southeast-4 ami-0ef23bfe787efe11e ca-central-1 ami-0163965a05b75f976 eu-central-1 ami-01edb54011f870f0c eu-central-2 ami-0bc500d6056a3b104 eu-north-1 ami-0ab155e935177f16a eu-south-1 ami-051b4c06b21f5a328 eu-south-2 ami-096644e5555c23b19 eu-west-1 ami-0faeeeb3d2b1aa07c eu-west-2 ami-00bb1522dc71b604f eu-west-3 ami-01e5397bd2b795bd3 il-central-1 ami-0b32feb5d77c64e61 me-central-1 ami-0a5158a3e68ab7e88 me-south-1 ami-024864ad1b799dbba sa-east-1 ami-0c402ffb0c4b7edc0 us-east-1 ami-057df4d0cb8cbae0d us-east-2 ami-07566e5da1fd297f8 us-gov-east-1 ami-0fe03a7e289354670 us-gov-west-1 ami-06b7cc6445c5da732 us-west-1 ami-02d20001c5b9df1e9 us-west-2 ami-0dfba457127fba98c Table 12.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-06c7b4e42179544df ap-east-1 ami-07b6a37fa6d2d2e99 ap-northeast-1 ami-056d2eef4a3638246 ap-northeast-2 ami-0bd5a7684f0ff4e02 ap-northeast-3 ami-0fd08063da50de1da ap-south-1 ami-08f1ae2cef8f9690e ap-south-2 ami-020ba25cc1ec53b1c ap-southeast-1 ami-0020a1c0964ac8e48 ap-southeast-2 ami-07013a63289350c3c ap-southeast-3 ami-041d6ca1d57e3190f ap-southeast-4 ami-06539e9cbefc28702 ca-central-1 ami-0bb3991641f2b40f6 eu-central-1 ami-0908d117c26059e39 eu-central-2 ami-0e48c82ffbde67ed2 eu-north-1 ami-016614599b38d515e eu-south-1 ami-01b6cc1f0fd7b431f eu-south-2 ami-0687e1d98e55e402d eu-west-1 ami-0bf0b7b1cb052d68d eu-west-2 ami-0ba0bf567caa63731 eu-west-3 ami-0eab6a7956a66deda il-central-1 ami-03b3cb1f4869bf21d me-central-1 ami-0a6e1ade3c9e206a1 me-south-1 ami-0aa0775c68eac9f6f sa-east-1 ami-07235eee0bb930c78 us-east-1 ami-005808ca73e7b36ff us-east-2 ami-0c5c9420f6b992e9e us-gov-east-1 ami-08c9b2b8d578caf92 us-gov-west-1 ami-0bdff65422ba7d95d us-west-1 ami-017ad4dd030a04233 us-west-2 ami-068d0af5e3c08e618 12.14.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file. 12.14.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . 
You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.15.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 12.15. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. 
The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. 
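If your deployment matches one of the conditions above that require a presigned URL instead of the s3:// schema, you can generate one with the AWS CLI. The bucket and object names in this sketch follow the earlier example, and the one-hour expiry is an assumption that you can adjust:
$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600
The command prints an HTTPS URL that you can use as the BootstrapIgnitionLocation parameter value in place of the s3:// form.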
Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. 
Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 12.15.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 12.20. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. 
Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: 
Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 12.16. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. 
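The CertificateAuthorities value that the parameter file in the following procedure requires can be read directly from the master.ign file in your installation directory. The following sketch assumes the standard pointer Ignition config layout, which matches the UserData structure in the control plane template later in this section; verify the output against your own file:
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign
The command prints the long data:text/plain;charset=utf-8;base64,... string that you specify for the CertificateAuthorities parameter.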
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. 
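Optionally, before you query the stack in the next step, you can block until CloudFormation finishes creating it. This sketch uses the generic wait subcommand of the AWS CLI with the stack name that you chose above:
$ aws cloudformation wait stack-create-complete --stack-name <name>
The command returns when the stack reaches the CREATE_COMPLETE status and exits with a nonzero status if creation fails.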
Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 12.16.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 12.21. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration 
Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.17. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. 
You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to start the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line.
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 12.17.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 12.22. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.18. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. 
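The example output above includes the message that it is now safe to remove the bootstrap resources. If you script your installation, you can chain this wait step with the bootstrap cleanup that is described later in this chapter. The following is a minimal sketch only, not part of the official procedure; the ./install_dir directory and the cluster-bootstrap stack name are placeholder values for your own installation directory and bootstrap stack name:

#!/usr/bin/env bash
set -euo pipefail

# Wait until the installer reports that bootstrapping is complete.
./openshift-install wait-for bootstrap-complete --dir ./install_dir --log-level=info

# The installer has confirmed that the bootstrap resources are no longer needed,
# so delete the bootstrap CloudFormation stack and wait for the deletion to finish.
aws cloudformation delete-stack --stack-name cluster-bootstrap
aws cloudformation wait stack-delete-complete --stack-name cluster-bootstrap

You can equally defer the stack deletion until after the initial Operator configuration, as described in the section about deleting the bootstrap resources; the stack is simply no longer required once bootstrapping completes.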
Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 12.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 12.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI.
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.22. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 12.22.1. 
Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 12.22.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 12.22.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 12.23. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 12.24. 
Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 
3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 12.25. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 12.26. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 12.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 12.28. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 12.29. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
[ "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": 
\"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. 
Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. 
Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", 
"watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_aws/installing-aws-user-infra
Chapter 9. CSISnapshotController [operator.openshift.io/v1]
Chapter 9. CSISnapshotController [operator.openshift.io/v1] Description CSISnapshotController provides a means to configure an operator to manage the CSI snapshots. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 9.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 9.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. 
latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 9.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 9.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 9.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 9.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 9.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/csisnapshotcontrollers DELETE : delete collection of CSISnapshotController GET : list objects of kind CSISnapshotController POST : create a CSISnapshotController /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name} DELETE : delete a CSISnapshotController GET : read the specified CSISnapshotController PATCH : partially update the specified CSISnapshotController PUT : replace the specified CSISnapshotController /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name}/status GET : read status of the specified CSISnapshotController PATCH : partially update status of the specified CSISnapshotController PUT : replace status of the specified CSISnapshotController 9.2.1. /apis/operator.openshift.io/v1/csisnapshotcontrollers HTTP method DELETE Description delete collection of CSISnapshotController Table 9.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CSISnapshotController Table 9.2. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotControllerList schema 401 - Unauthorized Empty HTTP method POST Description create a CSISnapshotController Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 202 - Accepted CSISnapshotController schema 401 - Unauthorized Empty 9.2.2. /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the CSISnapshotController HTTP method DELETE Description delete a CSISnapshotController Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSISnapshotController Table 9.9. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSISnapshotController Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. 
HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSISnapshotController Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 401 - Unauthorized Empty 9.2.3. /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name}/status Table 9.15. Global path parameters Parameter Type Description name string name of the CSISnapshotController HTTP method GET Description read status of the specified CSISnapshotController Table 9.16. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CSISnapshotController Table 9.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.18. 
HTTP responses HTTP code Response body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CSISnapshotController Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.21. HTTP responses HTTP code Response body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 401 - Unauthorized Empty
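The tables above describe the REST endpoints only. The following is a minimal sketch of calling this API with curl, assuming a cluster-scoped CSISnapshotController instance named cluster, a placeholder API server URL, and a bearer token in the TOKEN variable; the logLevel field in the patch body is used purely as an illustration. It demonstrates the dryRun and fieldValidation query parameters documented in the tables.

# Read the CSISnapshotController object (assumed here to be named "cluster"):
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://<api_server>:6443/apis/operator.openshift.io/v1/csisnapshotcontrollers/cluster"

# Dry-run a merge patch with strict server-side field validation:
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"logLevel":"Debug"}}' \
  "https://<api_server>:6443/apis/operator.openshift.io/v1/csisnapshotcontrollers/cluster?dryRun=All&fieldValidation=Strict"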
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/csisnapshotcontroller-operator-openshift-io-v1
3.3. Viewing the Raw Cluster Configuration
3.3. Viewing the Raw Cluster Configuration Although you should not edit the cluster configuration file directly, you can view the raw cluster configuration with the pcs cluster cib command. You can save the raw cluster configuration to a specified file with the pcs cluster cib filename command as described in Section 3.4, "Saving a Configuration Change to a File" .
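The two commands named above are shown below as a short sketch; the file name testfile.xml is an arbitrary example.

# Print the raw cluster configuration (CIB XML) to standard output:
pcs cluster cib

# Save the raw cluster configuration to a file (the file name is arbitrary):
pcs cluster cib testfile.xml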
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-pcsxmlview-haar
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/providing-feedback-on-red-hat-documentation_rhodf
Chapter 2. Configuring Identity Management for smart card authentication
Chapter 2. Configuring Identity Management for smart card authentication Identity Management (IdM) supports smart card authentication with: User certificates issued by the IdM certificate authority User certificates issued by an external certificate authority You can configure smart card authentication in IdM for both types of certificates. In this scenario, the rootca.pem CA certificate is the file containing the certificate of a trusted external certificate authority. Note Currently, IdM does not support importing multiple CAs that share the same Subject Distinguished Name (DN) but are cryptographically different. For information about smart card authentication in IdM, see Understanding smart card authentication . For more details on configuring smart card authentication: Configuring the IdM server for smart card authentication Configuring the IdM client for smart card authentication Adding a certificate to a user entry in the IdM Web UI Adding a certificate to a user entry in the IdM CLI Installing tools for managing and using smart cards Storing a certificate on a smart card Logging in to IdM with smart cards Configuring GDM access using smart card authentication Configuring su access using smart card authentication 2.1. Configuring the IdM server for smart card authentication If you want to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts, you must obtain the following certificates so that you can add them when running the ipa-advise script that configures the IdM server: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Steps 1 - 4a in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on the IdM server on which an IdM CA instance is running. The certificates of all of the intermediate CAs; that is, intermediate between the <EXAMPLE.ORG> CA and the IdM CA. To configure an IdM server for smart card authentication: Obtain files with the CA certificates in the PEM format. Run the built-in ipa-advise script. Reload the system configuration. Prerequisites You have root access to the IdM server. You have the root CA certificate and all the intermediate CA certificates. Procedure Create a directory in which you will do the configuration: Navigate to the directory: Obtain the relevant CA certificates stored in files in PEM format. If your CA certificate is stored in a file of a different format, such as DER, convert it to PEM format. The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Convert a DER file to a PEM file: For convenience, copy the certificates to the directory in which you want to do the configuration: Optional: If you use certificates of external certificate authorities, use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Generate a configuration script with the in-built ipa-advise utility, using the administrator's privileges: The config-server-for-smart-card-auth.sh script performs the following actions: It configures the IdM Apache HTTP Server. 
It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Execute the script, adding the PEM files containing the root CA and sub CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, run the procedure on each IdM server. 2.2. Using Ansible to configure the IdM server for smart card authentication You can use Ansible to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts. To do that, you must obtain the following certificates so that you can use them when running an Ansible playbook with the ipasmartcard_server ansible-freeipa role script: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Step 4 in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on any IdM CA server. The certificates of all of the CAs that are intermediate between the <EXAMPLE.ORG> CA and the IdM CA. Prerequisites You have root access to the IdM server. You know the IdM admin password. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. 
Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory: In your Ansible inventory file, specify the following: The IdM servers that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-server.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: The ipasmartcard_server Ansible role performs the following actions: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Connect to the IdM server as root : Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server listed in the inventory file is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, set the hosts variable in the Ansible playbook to ipacluster : Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 2.3. Configuring the IdM client for smart card authentication Follow this procedure to configure IdM clients for smart card authentication. The procedure needs to be run on each IdM system, a client or a server, to which you want to connect while using a smart card for authentication. For example, to enable an ssh connection from host A to host B, the script needs to be run on host B. As an administrator, run this procedure to enable smart card authentication using The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. The following procedure assumes that you are configuring smart card authentication on an IdM client, not an IdM server. 
For this reason you need two computers: an IdM server to generate the configuration script, and the IdM client on which to run the script. Prerequisites Your IdM server has been configured for smart card authentication, as described in Configuring the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate and all the intermediate CA certificates. You installed the IdM client with the --mkhomedir option to ensure remote users can log in successfully. If you do not create a home directory, the default login location is the root of the directory structure, / . Procedure On an IdM server, generate a configuration script with ipa-advise using the administrator's privileges: The config-client-for-smart-card-auth.sh script performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or with their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . From the IdM server, copy the script to a directory of your choice on the IdM client machine: From the IdM server, copy the CA certificate files in PEM format for convenience to the same directory on the IdM client machine as used in the step: On the client machine, execute the script, adding the PEM files containing the CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. The client is now configured for smart card authentication. 2.4. Using Ansible to configure IdM clients for smart card authentication Follow this procedure to use the ansible-freeipa ipasmartcard_client module to configure specific Identity Management (IdM) clients to permit IdM users to authenticate with a smart card. Run this procedure to enable smart card authentication for IdM users that use any of the following to access IdM: The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command Note This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. Prerequisites Your IdM server has been configured for smart card authentication, as described in Using Ansible to configure the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM CA certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: On your Ansible control node, navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory, for example: In your Ansible inventory file, specify the following: The IdM clients that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-clients.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook and inventory files: The ipasmartcard_client Ansible role performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . The clients listed in the ipaclients section of the inventory file are now configured for smart card authentication. Note If you have installed the IdM clients with the --mkhomedir option, remote users will be able to log in to their home directories. Otherwise, the default login location is the root of the directory structure, / . Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 2.5. Adding a certificate to a user entry in the IdM Web UI Follow this procedure to add an external certificate to a user entry in IdM Web UI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM Web UI as an administrator if you want to add a certificate to another user. For adding a certificate to your own profile, you do not need the administrator's credentials. Navigate to Users Active users sc_user . Find the Certificate option and click Add . On the command line, display the certificate in the PEM format using the cat utility or a text editor: Copy and paste the certificate from the CLI into the window that has opened in the Web UI. Click Add . Figure 2.1. Adding a new certificate in the IdM Web UI The sc_user entry now contains an external certificate. 2.6. 
Adding a certificate to a user entry in the IdM CLI Follow this procedure to add an external certificate to a user entry in IdM CLI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM CLI as an administrator if you want to add a certificate to another user: For adding a certificate to your own profile, you do not need the administrator's credentials: Create an environment variable containing the certificate with the header and footer removed and concatenated into a single line, which is the format expected by the ipa user-add-cert command: Note that the certificate in the testuser.crt file must be in the PEM format. Add the certificate to the profile of sc_user using the ipa user-add-cert command: The sc_user entry now contains an external certificate. 2.7. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running. 2.8. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . 
Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 2.9. Logging in to IdM with smart cards Follow this procedure to use smart cards for logging in to the IdM Web UI. Prerequisites The web browser is configured for using smart card authentication. The IdM server is configured for smart card authentication. The certificate installed on your smart card is either issued by the IdM server or has been added to the user entry in IdM. You know the PIN required to unlock the smart card. The smart card has been inserted into the reader. Procedure Open the IdM Web UI in the browser. Click Log In Using Certificate . If the Password Required dialog box opens, add the PIN to unlock the smart card and click the OK button. The User Identification Request dialog box opens. If the smart card contains more than one certificate, select the certificate you want to use for authentication in the drop down list below Choose a certificate to present as identification . Click the OK button. Now you are successfully logged in to the IdM Web UI. 2.10. Logging in to GDM using smart card authentication on an IdM client The GNOME Desktop Manager (GDM) requires authentication. You can use your password; however, you can also use a smart card for authentication. Follow this procedure to use smart card authentication to access GDM. Prerequisites The system has been configured for smart card authentication. For details, see Configuring the IdM client for smart card authentication . The smart card contains your certificate and private key. The user account is a member of the IdM domain. The certificate on the smart card maps to the user entry through: Assigning the certificate to a particular user entry. For details, see, Adding a certificate to a user entry in the IdM Web UI or Adding a certificate to a user entry in the IdM CLI . The certificate mapping data being applied to the account. For details, see Certificate mapping rules for configuring authentication on smart cards . Procedure Insert the smart card in the reader. Enter the smart card PIN. Click Sign In . You are successfully logged in to the RHEL system and you have a TGT provided by the IdM server. Verification In the Terminal window, enter klist and check the result: 2.11. Using smart card authentication with the su command Changing to a different user requires authentication. You can use a password or a certificate. Follow this procedure to use your smart card with the su command. It means that after entering the su command, you are prompted for the smart card PIN. Prerequisites Your IdM server and client have been configured for smart card authentication. 
See Configuring the IdM server for smart card authentication See Configuring the IdM client for smart card authentication The smart card contains your certificate and private key. See Storing a certificate on a smart card The card is inserted in the reader and connected to the computer. Procedure In a terminal window, change to a different user with the su command: If the configuration is correct, you are prompted to enter the smart card PIN.
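As a hedged verification sketch, the following commands can help confirm that the smart card was provisioned and that a smart card login produced Kerberos credentials. The pkcs15-tool options shown here are assumed to be available from the opensc package installed earlier; adjust them for your card and tooling.

# List the certificates and private keys stored on the inserted smart card:
pkcs15-tool --list-certificates
pkcs15-tool --list-keys

# After logging in with the smart card, confirm that a ticket-granting ticket was obtained:
klist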
[ "mkdir ~/SmartCard/", "cd ~/SmartCard/", "openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM", "cp /tmp/rootca.pem ~/SmartCard/ cp /tmp/subca.pem ~/SmartCard/ cp /tmp/issuingca.pem ~/SmartCard/", "openssl x509 -noout -text -in rootca.pem | more", "kinit admin ipa-advise config-server-for-smart-card-auth > config-server-for-smart-card-auth.sh", "chmod +x config-server-for-smart-card-auth.sh ./config-server-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. The ipa-certupdate command was successful", "SSLOCSPEnable off", "systemctl restart httpd", "openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM", "openssl x509 -noout -text -in root-ca.pem | more", "cd ~/ MyPlaybooks /", "mkdir SmartCard/", "cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt", "[ipaserver] ipaserver.idm.example.com [ipareplicas] ipareplica1.idm.example.com ipareplica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password= \"{{ ipaadmin_password }}\" ipasmartcard_server_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt", "--- - name: Playbook to set up smart card authentication for an IdM server hosts: ipaserver become: true roles: - role: ipasmartcard_server state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-server.yml", "ssh [email protected]", "SSLOCSPEnable off", "systemctl restart httpd", "--- - name: Playbook to setup smartcard for IPA server and replicas hosts: ipacluster [...]", "kinit admin ipa-advise config-client-for-smart-card-auth > config-client-for-smart-card-auth.sh", "scp config-client-for-smart-card-auth.sh root @ client.idm.example.com:/root/SmartCard/ Password: config-client-for-smart-card-auth.sh 100% 2419 3.5MB/s 00:00", "scp {rootca.pem,subca.pem,issuingca.pem} root @ client.idm.example.com:/root/SmartCard/ Password: rootca.pem 100% 1237 9.6KB/s 00:00 subca.pem 100% 2514 19.6KB/s 00:00 issuingca.pem 100% 2514 19.6KB/s 00:00", "kinit admin chmod +x config-client-for-smart-card-auth.sh ./config-client-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. 
The ipa-certupdate command was successful", "openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM", "openssl x509 -noout -text -in root-ca.pem | more", "cd ~/ MyPlaybooks /", "mkdir SmartCard/", "cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt", "[ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword ipasmartcard_client_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt", "--- - name: Playbook to set up smart card authentication for an IdM client hosts: ipaclients become: true roles: - role: ipasmartcard_client state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-clients.yml", "[user@client SmartCard]USD cat testuser.crt", "[user@client SmartCard]USD kinit admin", "[user@client SmartCard]USD kinit sc_user", "[user@client SmartCard]USD export CERT=`openssl x509 -outform der -in testuser.crt | base64 -w0 -`", "[user@client SmartCard]USD ipa user-add-cert sc_user --certificate=USDCERT", "yum -y install opensc gnutls-utils", "systemctl start pcscd", "systemctl status pcscd", "pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:", "pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name", "pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name", "pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init -F", "klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 04/20/2020 13:58:24 04/20/2020 23:58:24 krbtgt/[email protected] renew until 04/27/2020 08:58:15", "su - example.user PIN for smart_card" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_smart_card_authentication/configuring-idm-for-smart-card-auth_managing-smart-card-authentication
Chapter 3. Enabling Linux control group version 1 (cgroup v1)
Chapter 3. Enabling Linux control group version 1 (cgroup v1) As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.18 will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section. 3.1. Enabling Linux cgroup v1 during installation You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Procedure Create or edit the node.config object to specify the v1 cgroup: apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: "v1" Proceed with the installation as usual. Additional resources OpenShift Container Platform installation overview Configuring the Linux cgroup on your nodes
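The following is a minimal sketch of adding the node.config object as an installation manifest. The manifest file name cluster-node-cgroup-v1.yaml is an arbitrary example, and <installation_directory> is a placeholder for your installation assets directory.

# Generate the installation manifests:
openshift-install create manifests --dir <installation_directory>

# Add the node.config object as a manifest (the file name is arbitrary):
cat <<EOF > <installation_directory>/manifests/cluster-node-cgroup-v1.yaml
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v1"
EOF

# Continue the installation as usual:
openshift-install create cluster --dir <installation_directory>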
[ "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installation_configuration/enabling-cgroup-v1
Chapter 1. Introduction to the Ceph Orchestrator
Chapter 1. Introduction to the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm utility that provides the ability to discover devices and create services in a Red Hat Ceph Storage cluster. 1.1. Use of the Ceph Orchestrator Red Hat Ceph Storage Orchestrators are manager modules that primarily act as a bridge between a Red Hat Ceph Storage cluster and deployment tools like Rook and Cephadm for a unified experience. They also integrate with the Ceph command line interface and Ceph Dashboard. The following is a workflow diagram of Ceph Orchestrator: Note NFS-Ganesha gateway is not supported, starting from Red Hat Ceph Storage 5.1 release. Types of Red Hat Ceph Storage Orchestrators There are three main types of Red Hat Ceph Storage Orchestrators: Orchestrator CLI : These are common APIs used in Orchestrators and include a set of commands that can be implemented. These APIs also provide a common command line interface (CLI) to orchestrate ceph-mgr modules with external orchestration services. The following are the nomenclature used with the Ceph Orchestrator: Host : This is the host name of the physical host and not the pod name, DNS name, container name, or host name inside the container. Service type : This is the type of the service, such as nfs, mds, osd, mon, rgw, and mgr. Service : A functional service provided by a Ceph storage cluster such as monitors service, managers service, OSD services, Ceph Object Gateway service, and NFS service. Daemon : A specific instance of a service deployed by one or more hosts such as Ceph Object Gateway services can have different Ceph Object Gateway daemons running in three different hosts. Cephadm Orchestrator - This is a Ceph Orchestrator module that does not rely on an external tool such as Rook or Ansible, but rather manages nodes in a cluster by establishing an SSH connection and issuing explicit management commands. This module is intended for day-one and day-two operations. Using the Cephadm Orchestrator is the recommended way of installing a Ceph storage cluster without leveraging any deployment frameworks like Ansible. The idea is to provide the manager daemon with access to an SSH configuration and key that is able to connect to all nodes in a cluster to perform any management operations, like creating an inventory of storage devices, deploying and replacing OSDs, or starting and stopping Ceph daemons. In addition, the Cephadm Orchestrator will deploy container images managed by systemd in order to allow independent upgrades of co-located services. This orchestrator will also likely highlight a tool that encapsulates all necessary operations to manage the deployment of container image based services on the current host, including a command that bootstraps a minimal cluster running a Ceph Monitor and a Ceph Manager. Rook Orchestrator - Rook is an orchestration tool that uses the Kubernetes Rook operator to manage a Ceph storage cluster running inside a Kubernetes cluster. The rook module provides integration between Ceph's Orchestrator framework and Rook. Rook is an open source cloud-native storage operator for Kubernetes. Rook follows the "operator" model, in which a custom resource definition (CRD) object is defined in Kubernetes to describe a Ceph storage cluster and its desired state, and a rook operator daemon is running in a control loop that compares the current cluster state to desired state and takes steps to make them converge. 
The main object describing Ceph's desired state is the Ceph storage cluster CRD, which includes information about which devices should be consumed by OSDs, how many monitors should be running, and what version of Ceph should be used. Rook defines several other CRDs to describe RBD pools, CephFS file systems, and so on. The Rook Orchestrator module is the glue that runs in the ceph-mgr daemon and implements the Ceph orchestration API by making changes to the Ceph storage cluster in Kubernetes that describe desired cluster state. A Rook cluster's ceph-mgr daemon is running as a Kubernetes pod, and hence, the rook module can connect to the Kubernetes API without any explicit configuration.
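As an illustration of the Orchestrator CLI described above, the following commands query a Cephadm-managed cluster; they are a sketch of common read-only operations rather than a complete reference.

# List the hosts known to the orchestrator:
ceph orch host ls

# List services and the daemons that implement them:
ceph orch ls
ceph orch ps

# List the storage devices discovered on the cluster hosts:
ceph orch device ls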
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/operations_guide/introduction-to-the-ceph-orchestrator
Chapter 3. Operator Framework glossary of common terms
Chapter 3. Operator Framework glossary of common terms Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following terms are related to the Operator Framework, including Operator Lifecycle Manager (OLM) v1. 3.1. Bundle In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster. 3.2. Bundle image In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub. 3.3. Catalog source A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies. 3.4. Channel A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest. An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel. 3.5. Channel head A channel head refers to the latest known update in a particular channel. 3.6. Cluster service version A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on. 3.7. Dependency An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer. OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles. 3.8. Index image In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool. 3.9. Install plan An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV. 3.10. Multitenancy A tenant in OpenShift Container Platform is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. 
You can use tenants to provide a level of isolation between different groups or teams. When a cluster is shared by multiple users or groups, it is considered a multitenant cluster. 3.11. Operator group An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide. 3.12. Package In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs. 3.13. Registry A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels. 3.14. Subscription A subscription keeps CSVs up to date by tracking a channel in a package. 3.15. Update graph An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added.
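To tie several of these terms together, the following is a minimal sketch using the classic OLM (operators.coreos.com) API: a catalog source that points at an index image, and a subscription that tracks the stable channel of a package from that catalog. All names, namespaces, and the image reference are placeholders.

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/example-catalog-index:latest
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace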
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/extensions/of-terms
Chapter 7. Configuring the Red Hat build of OptaPlanner solver
Chapter 7. Configuring the Red Hat build of OptaPlanner solver You can use the following methods to configure your OptaPlanner solver: Use an XML file. Use the SolverConfig API. Add class annotations and JavaBean property annotations on the domain model. Control the method that OptaPlanner uses to access your domain. Define custom properties. 7.1. Using an XML file to configure the OptaPlanner solver You can use an XML file to configure the solver. In a typical project that follows the Maven directory structure, after you build a Solver instance with the SolverFactory , the solverConfig XML file is located in the $PROJECT_DIR/src/main/resources/org/optaplanner/examples/<PROJECT>/solver directory, where <PROJECT> is the name of your OptaPlanner project. Alternatively, you can create a SolverFactory from a file with SolverFactory.createFromXmlFile() . However, for portability reasons, a classpath resource is recommended. Both a Solver and a SolverFactory have a generic type called Solution_ , which is the class representing a planning problem and solution. OptaPlanner makes it relatively easy to switch optimization algorithms by changing the configuration. Procedure Build a Solver instance with the SolverFactory . Configure the solver configuration XML file: Define the model. Define the score function. Optional: Configure the optimization algorithm. The following example is a solver XML file for the NQueens problem: <?xml version="1.0" encoding="UTF-8"?> <solver xmlns="https://www.optaplanner.org/xsd/solver" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://www.optaplanner.org/xsd/solver https://www.optaplanner.org/xsd/solver/solver.xsd"> <!-- Define the model --> <solutionClass>org.optaplanner.examples.nqueens.domain.NQueens</solutionClass> <entityClass>org.optaplanner.examples.nqueens.domain.Queen</entityClass> <!-- Define the score function --> <scoreDirectorFactory> <scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensConstraints.drl</scoreDrl> </scoreDirectorFactory> <!-- Configure the optimization algorithms (optional) --> <termination> ... </termination> <constructionHeuristic> ... </constructionHeuristic> <localSearch> ... </localSearch> </solver> Note On some environments, for example OSGi and JBoss modules, classpath resources such as the solver config, score DRLs, and domain classes in your JAR files might not be available to the default ClassLoader of the optaplanner-core JAR file. In those cases, provide the ClassLoader of your classes as a parameter: SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource( ".../nqueensSolverConfig.xml", getClass().getClassLoader()); Configure the SolverFactory with a solver configuration XML file, provided as a classpath resource as defined by ClassLoader.getResource() : SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource( "org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml"); Solver<NQueens> solver = solverFactory.buildSolver(); 7.2. Using the Java API to configure the OptaPlanner solver You can configure a solver by using the SolverConfig API. This is especially useful to change values dynamically at runtime. 
The following example changes the running time based on system properties before building the Solver in the NQueens project: SolverConfig solverConfig = SolverConfig.createFromXmlResource( "org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml"); solverConfig.withTerminationConfig(new TerminationConfig() .withMinutesSpentLimit(userInput)); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver(); Every element in the solver configuration XML file is available as a Config class or a property on a Config class in the package namespace org.optaplanner.core.config . These Config classes are the Java representation of the XML format. They build the runtime components of the package namespace org.optaplanner.core.impl and assemble them into an efficient Solver . Note To configure a SolverFactory dynamically for each user request, build a template SolverConfig during initialization and copy it with the copy constructor for each user request. The following example shows how to do this with the NQueens problem: private SolverConfig template; public void init() { template = SolverConfig.createFromXmlResource( "org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml"); template.setTerminationConfig(new TerminationConfig()); } // Called concurrently from different threads public void userRequest(..., long userInput) { SolverConfig solverConfig = new SolverConfig(template); // Copy it solverConfig.getTerminationConfig().setMinutesSpentLimit(userInput); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver(); ... } 7.3. OptaPlanner annotation You must specify which classes in your domain model are planning entities, which properties are planning variables, and so on. Use one of the following methods to add annotations to your OptaPlanner project: Add class annotations and JavaBean property annotations on the domain model. The property annotations must be on the getter method, not on the setter method. Annotated getter methods do not need to be public. This is the recommended method. Add class annotations and field annotations on the domain model. Annotated fields do not need to be public. 7.4. Specifying OptaPlanner domain access By default, OptaPlanner accesses your domain using reflection. Reflection is reliable but slow compared to direct access. Alternatively, you can configure OptaPlanner to access your domain using Gizmo, which will generate bytecode that directly accesses the fields and methods of your domain without reflection. However, this method has the following restrictions: The planning annotations can only be on public fields and public getters. io.quarkus.gizmo:gizmo must be on the classpath. Note These restrictions do not apply when you use OptaPlanner with Quarkus because Gizmo is the default domain access type. Procedure To use Gizmo outside of Quarkus, set the domainAccessType in the solver configuration: <solver> <domainAccessType>GIZMO</domainAccessType> </solver> 7.5. Configuring custom properties In your OptaPlanner projects, you can add custom properties to solver configuration elements that instantiate classes and have documents that explicitly mention custom properties. Prerequisites You have a solver. Procedure Add a custom property. 
For example, if your EasyScoreCalculator has heavy calculations that are cached, and you want to increase the cache size in one benchmark, add the myCacheSize property: <scoreDirectorFactory> <easyScoreCalculatorClass>...MyEasyScoreCalculator</easyScoreCalculatorClass> <easyScoreCalculatorCustomProperties> <property name="myCacheSize" value="1000"/><!-- Override value --> </easyScoreCalculatorCustomProperties> </scoreDirectorFactory> Add a public setter for each custom property, which is called when a Solver is built. public class MyEasyScoreCalculator extends EasyScoreCalculator<MySolution, SimpleScore> { private int myCacheSize = 500; // Default value @SuppressWarnings("unused") public void setMyCacheSize(int myCacheSize) { this.myCacheSize = myCacheSize; } ... } Most value types are supported, including boolean , int , double , BigDecimal , String and enums .
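Once a Solver has been built with any of the approaches in this chapter, it is used the same way. The following is a minimal sketch; loadProblem() is a placeholder for however your application builds or loads an unsolved NQueens instance.

SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource(
        "org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml");
Solver<NQueens> solver = solverFactory.buildSolver();

NQueens problem = loadProblem(); // placeholder: build or load an unsolved NQueens instance
NQueens bestSolution = solver.solve(problem); // blocks until termination, then returns the best solution found
System.out.println("Best score: " + bestSolution.getScore());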
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <solver xmlns=\"https://www.optaplanner.org/xsd/solver\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://www.optaplanner.org/xsd/solver https://www.optaplanner.org/xsd/solver/solver.xsd\"> <!-- Define the model --> <solutionClass>org.optaplanner.examples.nqueens.domain.NQueens</solutionClass> <entityClass>org.optaplanner.examples.nqueens.domain.Queen</entityClass> <!-- Define the score function --> <scoreDirectorFactory> <scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensConstraints.drl</scoreDrl> </scoreDirectorFactory> <!-- Configure the optimization algorithms (optional) --> <termination> </termination> <constructionHeuristic> </constructionHeuristic> <localSearch> </localSearch> </solver>", "SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource( \".../nqueensSolverConfig.xml\", getClass().getClassLoader());", "SolverFasctory<NQueens> solverFactory = SolverFactory.createFromXmlResource( \"org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml\"); Solver<NQueens> solver = solverFactory.buildSolver();", "SolverConfig solverConfig = SolverConfig.createFromXmlResource( \"org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml\"); solverConfig.withTerminationConfig(new TerminationConfig() .withMinutesSpentLimit(userInput)); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver();", "private SolverConfig template; public void init() { template = SolverConfig.createFromXmlResource( \"org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml\"); template.setTerminationConfig(new TerminationConfig()); } // Called concurrently from different threads public void userRequest(..., long userInput) { SolverConfig solverConfig = new SolverConfig(template); // Copy it solverConfig.getTerminationConfig().setMinutesSpentLimit(userInput); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver(); }", "<solver> <domainAccessType>GIZMO</domainAccessType> </solver>", "<scoreDirectorFactory> <easyScoreCalculatorClass>...MyEasyScoreCalculator</easyScoreCalculatorClass> <easyScoreCalculatorCustomProperties> <property name=\"myCacheSize\" value=\"1000\"/><!-- Override value --> </easyScoreCalculatorCustomProperties> </scoreDirectorFactory>", "public class MyEasyScoreCalculator extends EasyScoreCalculator<MySolution, SimpleScore> { private int myCacheSize = 500; // Default value @SuppressWarnings(\"unused\") public void setMyCacheSize(int myCacheSize) { this.myCacheSize = myCacheSize; } }" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/configuring-planner-proc_developing-solvers
Using AMQ Streams on OpenShift
Using AMQ Streams on OpenShift Red Hat AMQ 2020.Q4 For use with AMQ Streams 1.6 on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/index
Chapter 4. Entry Point
Chapter 4. Entry Point A user begins interacting with the API through a GET request on the entry point URI consisting of a host and base . Example 4.1. Accessing the API Entry Point If the host is www.example.com and the base is /ovirt-engine/api , the entry point appears with the following request: Note For simplicity, all other examples omit the Host: and Authorization: request headers and assume the base is the default /ovirt-engine/api path. This base path differs depending on your implementation. 4.1. Product Information The entry point contains a product_info element to help an API user determine the legitimacy of the Red Hat Virtualization environment. This includes the name of the product, the vendor and the version . Example 4.2. Verify a genuine Red Hat Virtualization environment The following elements identify a genuine Red Hat Virtualization 4.0 environment:
[ "GET /ovirt-engine/api HTTP/1.1 Accept: application/xml Host: www.example.com Authorization: [base64 encoded credentials] HTTP/1.1 200 OK Content-Type: application/xml <api> <link rel=\"hosts\" href=\"/ovirt-engine/api/hosts\"/> <link rel=\"vms\" href=\"/ovirt-engine/api/vms\"/> <product_info> <name>Red Hat Virtualization</name> <vendor>Red Hat</vendor> <version revision=\"0\" build=\"0\" minor=\"0\" major=\"4\"/> </product_info> <special_objects> <link rel=\"templates/blank\" href=\"...\"/> <link rel=\"tags/root\" href=\"...\"/> </special_objects> <summary> <vms> <total>10</total> <active>3</active> </vms> <hosts> <total>2</total> <active>2</active> </hosts> <users> <total>8</total> <active>2</active> </users> <storage_domains> <total>2</total> <active>2</active> </storage_domains> </summary> </ovirt-engine/api>", "<api> <product_info> <name>Red Hat Virtualization</name> <vendor>Red Hat</vendor> <version> <build>2</build> <full_version>4.0.2.3-0.1.el7ev</full_version> <major>4</major> <minor>0</minor> <revision>0</revision> </version> </product_info> </ovirt-engine/api>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/chap-entry_point
Chapter 1. OpenShift Container Platform registry overview
Chapter 1. OpenShift Container Platform registry overview OpenShift Container Platform can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images. This overview contains reference information and links for registries commonly used with OpenShift Container Platform, with a focus on the internal image registry. 1.1. Glossary of common terms for OpenShift Container Platform registry This glossary defines the common terms that are used in the registry content. container Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or your local host. Image Registry Operator The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location. image repository An image repository is a collection of related container images and tags identifying images. mirror registry The mirror registry is a registry that holds the mirror of OpenShift Container Platform images. namespace A namespace isolates groups of resources within a single cluster. OpenShift Container Platform registry OpenShift Container Platform registry is the registry provided by OpenShift Container Platform to manage images. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. private registry A registry is a server that implements the container image registry API. A private registry is a registry that requires authentication to allow users to access its contents. public registry A registry is a server that implements the container image registry API. A public registry is a registry that serves its content publicly. Quay.io A public Red Hat Quay Container Registry instance provided and maintained by Red Hat, that serves most of the container images and Operators to OpenShift Container Platform clusters. registry authentication To push and pull images to and from private image repositories, the registry needs to authenticate its users with credentials. route Exposes a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scale down To decrease the number of replicas. scale up To increase the number of replicas. service A service exposes a running application on a set of pods. 1.2. Integrated OpenShift Container Platform registry OpenShift Container Platform provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster.
When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations. The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and imagestreams. Additional resources Image Registry Operator in OpenShift Container Platform 1.3. Third-party registries OpenShift Container Platform can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift Container Platform registry. In this situation, OpenShift Container Platform will fetch tags from the remote registry upon imagestream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.3.1. Authentication OpenShift Container Platform can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Container Platform to push and pull images to and from private repositories. 1.3.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. Login by running the following command and entering your username and password to authenticate: USD podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: USD podman pull registry.redhat.io/<repository_name> 1.4. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced registry features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from OpenShift Container Platform like any remote container image registry. Additional resources Red Hat Quay product documentation 1.5. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on OpenShift Container Platform. Following the move to the new registry, the existing registry will be available for a period of time. 
Note OpenShift Container Platform pulls images from registry.redhat.io , so you must configure your cluster to use it. The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with OpenShift Container Platform, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Container Platform. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All imagestreams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the imagestreams in the openshift namespace can import. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images. Additional resources Registry service accounts
[ "podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>", "podman pull registry.redhat.io/<repository_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/registry/registry-overview
Chapter 1. Security Architecture
Chapter 1. Security Architecture Abstract In the OSGi container, it is possible to deploy applications supporting a variety of security features. Currently, only the Java Authentication and Authorization Service (JAAS) is based on a common, container-wide infrastructure. Other security features are provided separately by the individual products and components deployed in the container. 1.1. OSGi Container Security Overview Figure 1.1, "OSGi Container Security Architecture" shows an overview of the security infrastructure that is used across the container and is accessible to all bundles deployed in the container. This common security infrastructure currently consists of a mechanism for making JAAS realms (or login modules) available to all application bundles. Figure 1.1. OSGi Container Security Architecture JAAS realms A JAAS realm or login module is a plug-in module that provides authentication and authorization data to Java applications, as defined by the Java Authentication and Authorization Service (JAAS) specification. Red Hat Fuse supports a special mechanism for defining JAAS login modules (in either a Spring or a blueprint file), which makes the login module accessible to all bundles in the container. This makes it easy for multiple applications running in the OSGi container to consolidate their security data into a single JAAS realm. karaf realm The OSGi container has a predefined JAAS realm, the karaf realm. Red Hat Fuse uses the karaf realm to provide authentication for remote administration of the OSGi runtime, for the Fuse Management Console, and for JMX management. The karaf realm uses a simple file-based repository, where authentication data is stored in the InstallDir /etc/users.properties file. You can use the karaf realm in your own applications. Simply configure karaf as the name of the JAAS realm that you want to use. Your application then performs authentication using the data from the users.properties file. Console port You can administer the OSGi container remotely either by connecting to the console port with a Karaf client or using the Karaf ssh:ssh command. The console port is secured by a JAAS login feature that connects to the karaf realm. Users that try to connect to the console port will be prompted to enter a username and password that must match one of the accounts from the karaf realm. JMX port You can manage the OSGi container by connecting to the JMX port (for example, using Java's JConsole). The JMX port is also secured by a JAAS login feature that connects to the karaf realm. Application bundles and JAAS security Any application bundles that you deploy into the OSGi container can access the container's JAAS realms. The application bundle simply references one of the existing JAAS realms by name (which corresponds to an instance of a JAAS login module). It is essential, however, that the JAAS realms are defined using the OSGi container's own login configuration mechanism-by default, Java provides a simple file-based login configuration implementation, but you cannot use this implementation in the context of the OSGi container. 1.2. Apache Camel Security Overview Figure 1.2, "Apache Camel Security Architecture" shows an overview of the basic options for securely routing messages between Apache Camel endpoints. Figure 1.2. 
Apache Camel Security Architecture Alternatives for Apache Camel security As shown in Figure 1.2, "Apache Camel Security Architecture" , you have the following options for securing messages: Endpoint security -part (a) shows a message sent between two routes with secure endpoints. The producer endpoint on the left opens a secure connection (typically using SSL/TLS) to the consumer endpoint on the right. Both of the endpoints support security in this scenario. With endpoint security, it is typically possible to perform some form of peer authentication (and sometimes authorization). Payload security -part (b) shows a message sent between two routes where the endpoints are both insecure . To protect the message from unauthorized snooping in this case, use a payload processor that encrypts the message before sending and decrypts the message after it is received. A limitation of payload security is that it does not provide any kind of authentication or authorization mechanisms. Endpoint security There are several Camel components that support security features. It is important to note, however, that these security features are implemented by the individual components, not by the Camel core. Hence, the kinds of security feature that are supported, and the details of their implementation, vary from component to component. Some of the Camel components that currently support security are, as follows: JMS and ActiveMQ-SSL/TLS security and JAAS security for client-to-broker and broker-to-broker communication. Jetty-HTTP Basic Authentication and SSL/TLS security. CXF-SSL/TLS security and WS-Security. Crypto-creates and verifies digital signatures in order to guarantee message integrity. Netty-SSL/TLS security. MINA-SSL/TLS security. Cometd-SSL/TLS security. glogin and gauth-authorization in the context of Google applications. Payload security Apache Camel provides the following payload security implementations, where the encryption and decryption steps are exposed as data formats on the marshal() and unmarshal() operations the section called "XMLSecurity data format" . the section called "Crypto data format" . XMLSecurity data format The XMLSecurity data format is specifically designed to encrypt XML payloads. When using this data format, you can specify which XML element to encrypt. The default behavior is to encrypt all XML elements. This feature uses a symmetric encryption algorithm. For more details, see http://camel.apache.org/xmlsecurity-dataformat.html . Crypto data format The crypto data format is a general purpose encryption feature that can encrypt any kind of payload. It is based on the Java Cryptographic Extension and implements only symmetric (shared-key) encryption and decryption. For more details, see http://camel.apache.org/crypto.html .
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/arch-architecture
Chapter 1. Getting support
Chapter 1. Getting support If you experience difficulty with a procedure described in this documentation, or with Red Hat Quay in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your deployment, you can use the Red Hat Quay debugging tool, or check the health endpoint of your deployment to obtain information about your problem. After you have debugged or obtained health information about your deployment, you can search the Red Hat Knowledgebase for a solution or file a support ticket. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue to the ProjectQuay project. Provide specific details, such as the section name and Red Hat Quay version. 1.1. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. The Red Hat Quay Support Team also maintains a Consolidate troubleshooting article for Red Hat Quay that details solutions to common problems. This is an evolving document that can help users navigate various issues effectively and efficiently. 1.2. Searching the Red Hat Knowledgebase In the event of a Red Hat Quay issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including: Red Hat Quay components (such as database ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click Search . Select the Red Hat Quay product filter. Select the Knowledgebase content type filter. 1.3. Submitting a support case Prerequisites You have a Red Hat Customer Portal account. You have a Red Hat standard or premium Subscription. Procedure Log in to the Red Hat Customer Portal and select Open a support case . Select the Troubleshoot tab. For Summary , enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, continue to the following step. For Product , select Red Hat Quay . Select the version of Red Hat Quay that you are using. Click Continue . Optional. Drag and drop, paste, or browse to upload a file. This could be debug logs gathered from your Red Hat Quay deployment. Click Get support to file your ticket.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/troubleshooting_red_hat_quay/getting-support
25.3.3. Mapping Subchannels and Network Device Names
25.3.3. Mapping Subchannels and Network Device Names The DEVICE= option in the ifcfg file does not determine the mapping of subchannels to network device names. Instead, the udev rules file /etc/udev/rules.d/70-persistent-net.rules determines which network device channel gets which network device name. When configuring a new network device on System z, the system automatically adds a new rule to that file and assigns the next unused device name. You can then edit the values assigned to the NAME= variable for each device. Example content of /etc/udev/rules.d/70-persistent-net.rules :
[ "This file was automatically generated by the /lib/udev/write_net_rules program run by the persistent-net-generator.rules rules file. # You can modify it,as long as you keep each rule on a single line. S/390 qeth device at 0.0.f5f0 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.f5f0\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"eth0\" S/390 ctcm device at 0.0.1000 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"ctcm\", KERNELS==\"0.0.1000\", ATTR{type}==\"256\", KERNEL==\"ctc*\", NAME=\"ctc0\" S/390 qeth device at 0.0.8024 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.8024\", ATTR{type}==\"1\", KERNEL==\"hsi*\", NAME=\"hsi0\" S/390 qeth device at 0.0.8124 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.8124\", ATTR{type}==\"1\", KERNEL==\"hsi*\", NAME=\"hsi1\" S/390 qeth device at 0.0.1017 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.1017\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"eth3\" S/390 qeth device at 0.0.8324 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.8324\", ATTR{type}==\"1\", KERNEL==\"hsi*\", NAME=\"hsi3\" S/390 qeth device at 0.0.8224 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.8224\", ATTR{type}==\"1\", KERNEL==\"hsi*\", NAME=\"hsi2\" S/390 qeth device at 0.0.1010 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.1010\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"eth2\" S/390 lcs device at 0.0.1240 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"lcs\", KERNELS==\"0.0.1240\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"eth1\" S/390 qeth device at 0.0.1013 SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"qeth\", KERNELS==\"0.0.1013\", ATTR{type}==\"1\", KERNEL==\"hsi*\", NAME=\"hsi4\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ap-s390info-Adding_a_Network_Device-Mapping_Subchannels_and_network_device_names
6.4. Bridged Networking
6.4. Bridged Networking Bridged networking (also known as network bridging or virtual network switching) is used to place virtual machine network interfaces on the same network as the physical interface. Bridges require minimal configuration and make a virtual machine appear on an existing network, which reduces management overhead and network complexity. As bridges contain few components and configuration variables, they provide a transparent setup which is straightforward to understand and troubleshoot, if required. Bridging can be configured in a virtualized environment using standard Red Hat Enterprise Linux tools, virt-manager , or libvirt , and is described in the following sections. However, even in a virtualized environment, bridges may be more easily created using the host operating system's networking tools. More information about this bridge creation method can be found in the Red Hat Enterprise Linux 7 Networking Guide . 6.4.1. Configuring Bridged Networking on a Red Hat Enterprise Linux 7 Host Bridged networking can be configured for virtual machines on a Red Hat Enterprise Linux host, independent of the virtualization management tools. This configuration is mainly recommended when the virtualization bridge is the host's only network interface, or is the host's management network interface. For instructions on configuring network bridging without using virtualization tools, see the Red Hat Enterprise Linux 7 Networking Guide . 6.4.2. Bridged Networking with Virtual Machine Manager This section provides instructions on creating a bridge from a host machine's interface to a guest virtual machine using virt-manager . Note Depending on your environment, setting up a bridge with libvirt tools in Red Hat Enterprise Linux 7 may require disabling Network Manager, which is not recommended by Red Hat. A bridge created with libvirt also requires libvirtd to be running for the bridge to maintain network connectivity. It is recommended to configure bridged networking on the physical Red Hat Enterprise Linux host as described in the Red Hat Enterprise Linux 7 Networking Guide , while using libvirt after bridge creation to add virtual machine interfaces to the bridges. Procedure 6.1. Creating a bridge with virt-manager From the virt-manager main menu, click Edit ⇒ Connection Details to open the Connection Details window. Click the Network Interfaces tab. Click the + at the bottom of the window to configure a new network interface. In the Interface type drop-down menu, select Bridge , and then click Forward to continue. Figure 6.1. Adding a bridge In the Name field, enter a name for the bridge, such as br0 . Select a Start mode from the drop-down menu. Choose from one of the following: none - deactivates the bridge onboot - activates the bridge on the guest virtual machine reboot hotplug - activates the bridge even if the guest virtual machine is running Check the Activate now check box to activate the bridge immediately. To configure either the IP settings or Bridge settings , click the appropriate Configure button. A separate window will open to specify the required settings. Make any necessary changes and click OK when done. Select the physical interface to connect to your virtual machines. If the interface is currently in use by another guest virtual machine, you will receive a warning message. Click Finish and the wizard closes, taking you back to the Connections menu. Figure 6.2. Adding a bridge Select the bridge to use, and click Apply to exit the wizard. 
To stop the interface, click the Stop Interface key. Once the bridge is stopped, to delete the interface, click the Delete Interface key. 6.4.3. Bridged Networking with libvirt Depending on your environment, setting up a bridge with libvirt in Red Hat Enterprise Linux 7 may require disabling Network Manager, which is not recommended by Red Hat. This also requires libvirtd to be running for the bridge to operate. It is recommended to configure bridged networking on the physical Red Hat Enterprise Linux host as described in the Red Hat Enterprise Linux 7 Networking Guide . Important libvirt is now able to take advantage of new kernel tunable parameters to manage host bridge forwarding database (FDB) entries, thus potentially improving system network performance when bridging multiple virtual machines. Set the macTableManager attribute of a network's <bridge> element to 'libvirt' in the host's XML configuration file: This will turn off learning (flood) mode on all bridge ports, and libvirt will add or remove entries to the FDB as necessary. Along with removing the overhead of learning the proper forwarding ports for MAC addresses, this also allows the kernel to disable promiscuous mode on the physical device that connects the bridge to the network, which further reduces overhead.
[ "<bridge name='br0' macTableManager='libvirt'/>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Network_configuration-Bridged_networking
20.6. Understanding Password Expiration Controls
20.6. Understanding Password Expiration Controls When a user authenticates to Directory Server using a valid password, and if the password is expired, will expire soon, or needs to be reset, the server sends the following LDAP controls back to the client: Expired control ( 2.16.840.1.113730.3.4.4 ): Indicates that the password is expired. Directory Server sends this control in the following situations: The password is expired, and grace logins have been exhausted. The server rejects the bind with an Error 49 message. The password is expired, but grace logins are still available. The bind will be allowed. If passwordMustChange is enabled in the cn=config entry, and a user needs to reset the password after an administrator changed it. The bind is allowed, but any subsequent operation, other than changing the password, results in an Error 53 message. Expiring control ( 2.16.840.1.113730.3.4.5 ): Indicates that the password will expire soon. Directory Server sends this control in the following situations: The password will expire within the password warning period set in the passwordWarning attribute in the cn=config entry. If the password policy configuration option is enabled in the passwordSendExpiringTime attribute in the cn=config entry, the expiring control is always returned, regardless of whether the password is within the warning period. Bind response control ( 1.3.6.1.4.1.42.2.27.8.5.1 ): The control contains detailed information about the state of the password that is about to expire or will expire soon. Note Directory Server only sends the bind response control if the client requested it. For example, if you use ldapsearch , you must pass the -e ppolicy parameter to the command to request the bind response control. Example 20.1. Requesting the Bind Response Control in a Query If you request the bind response control, for example by passing the -e ppolicy parameter to the ldapsearch command, the server returns detailed information about account expiration. For example:
[ "ldapsearch -D \"uid= user_name ,dc=example,dc=com\" -xLLL -W -b \"dc=example,dc=com\" -e ppolicy ldap_bind: Success (0); Password expired (Password expired, 1 grace logins remain)" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/understanding_password_expiration_controls
User and group APIs
User and group APIs OpenShift Container Platform 4.16 Reference guide for user and group APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/user_and_group_apis/index
Configuring and managing replication
Configuring and managing replication Red Hat Directory Server 12 Replicating data to other Directory Server instances Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/index
Web Console Guide
Web Console Guide Migration Toolkit for Runtimes 1.2 Use the Migration Toolkit for Runtimes web console to group your applications into projects for analysis. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/web_console_guide/index
2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System
2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System This section describes the steps for installing the KVM hypervisor on an existing Red Hat Enterprise Linux 7 system. To install the packages, your machine must be registered and subscribed to the Red Hat Customer Portal. To register using Red Hat Subscription Manager, run the subscription-manager register command and follow the prompts. Alternatively, run the Red Hat Subscription Manager application from Applications → System Tools on the desktop to register. If you do not have a valid Red Hat subscription, visit the Red Hat online store to obtain one. For more information on registering and subscribing a system to the Red Hat Customer Portal, see https://access.redhat.com/solutions/253273 . 2.2.1. Installing Virtualization Packages Manually To use virtualization on Red Hat Enterprise Linux, at minimum, you need to install the following packages: qemu-kvm : This package provides the user-level KVM emulator and facilitates communication between hosts and guest virtual machines. qemu-img : This package provides disk management for guest virtual machines. Note The qemu-img package is installed as a dependency of the qemu-kvm package. libvirt : This package provides the server and host-side libraries for interacting with hypervisors and host systems, and the libvirtd daemon that handles the library calls, manages virtual machines, and controls the hypervisor. To install these packages, enter the following command: Several additional virtualization management packages are also available and are recommended when using virtualization: virt-install : This package provides the virt-install command for creating virtual machines from the command line. libvirt-python : This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. virt-manager : This package provides the virt-manager tool, also known as Virtual Machine Manager . This is a graphical tool for administering virtual machines. It uses the libvirt-client library as the management API. libvirt-client : This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command-line tool to manage and control virtual machines and hypervisors from the command line or a special virtualization shell. You can install all of these recommended virtualization packages with the following command: 2.2.2. Installing Virtualization Package Groups The virtualization packages can also be installed from package groups. You can view the list of available groups by running the yum grouplist hidden command. Out of the complete list of available package groups, the following table describes the virtualization package groups and what they provide. Table 2.1.
Virtualization Package Groups Package Group Description Mandatory Packages Optional Packages Virtualization Hypervisor Smallest possible virtualization host installation libvirt, qemu-kvm, qemu-img qemu-kvm-tools Virtualization Client Clients for installing and managing virtualization instances gnome-boxes, virt-install, virt-manager, virt-viewer, qemu-img virt-top, libguestfs-tools, libguestfs-tools-c Virtualization Platform Provides an interface for accessing and controlling virtual machines and containers libvirt, libvirt-client, virt-who, qemu-img fence-virtd-libvirt, fence-virtd-multicast, fence-virtd-serial, libvirt-cim, libvirt-java, libvirt-snmp, perl-Sys-Virt Virtualization Tools Tools for offline virtual image management libguestfs, qemu-img libguestfs-java, libguestfs-tools, libguestfs-tools-c To install a package group, run the yum group install package_group command. For example, to install the Virtualization Tools package group with all the package types, run: For more information on installing package groups, see How to install a group of packages with yum on Red Hat Enterprise Linux? Knowledgebase article.
[ "yum install qemu-kvm libvirt", "yum install virt-install libvirt-python virt-manager virt-install libvirt-client", "yum group install \"Virtualization Tools\" --setopt=group_package_types=mandatory,default,optional" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Installing_the_virtualization_packages-Installing_virtualization_packages_on_an_existing_Red_Hat_Enterprise_Linux_system
Chapter 30. Configuring ethtool settings in NetworkManager connection profiles
Chapter 30. Configuring ethtool settings in NetworkManager connection profiles NetworkManager can configure certain network driver and hardware settings persistently. Compared to using the ethtool utility to manage these settings, this has the benefit of not losing the settings after a reboot. You can set the following ethtool settings in NetworkManager connection profiles: Offload features Network interface controllers can use the TCP offload engine (TOE) to offload processing certain operations to the network controller. This improves the network throughput. Interrupt coalesce settings By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput. Ring buffers These buffers store incoming and outgoing network packets. You can increase the ring buffer sizes to reduce a high packet drop rate. Channel settings A network interface manages its associated number of channels along with hardware settings and network drivers. All devices associated with a network interface communicate with each other through interrupt requests (IRQ). Each device queue holds pending IRQ and communicates with each other over a data line known as channel. Types of queues are associated with specific channel types. These channel types include: rx for receiving queues tx for transmit queues other for link interrupts or single root input/output virtualization (SR-IOV) coordination combined for hardware capacity-based multipurpose channels 30.1. Configuring an ethtool offload feature by using nmcli You can use NetworkManager to enable and disable ethtool offload features in a connection profile. Procedure For example, to enable the RX offload feature and disable TX offload in the enp1s0 connection profile, enter: This command explicitly enables RX offload and disables TX offload. To remove the setting of an offload feature that you previously enabled or disabled, set the feature's parameter to a null value. For example, to remove the configuration for TX offload, enter: Reactivate the network profile: Verification Use the ethtool -k command to display the current offload features of a network device: Additional resources nm-settings-nmcli(5) man page on your system 30.2. Configuring an ethtool offload feature by using the network RHEL system role Network interface controllers can use the TCP offload engine (TOE) to offload processing certain operations to the network controller. This improves the network throughput. You configure offload features in the connection profile of the network interface. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. 
Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up The settings specified in the example playbook include the following: gro: no Disables Generic receive offload (GRO). gso: yes Enables Generic segmentation offload (GSO). tx_sctp_segmentation: no Disables TX stream control transmission protocol (SCTP) segmentation. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the offload settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 30.3. Configuring an ethtool coalesce settings by using nmcli You can use NetworkManager to set ethtool coalesce settings in connection profiles. Procedure For example, to set the maximum number of received packets to delay to 128 in the enp1s0 connection profile, enter: To remove a coalesce setting, set it to a null value. For example, to remove the ethtool.coalesce-rx-frames setting, enter: To reactivate the network profile: Verification Use the ethtool -c command to display the current offload features of a network device: Additional resources nm-settings-nmcli(5) man page on your system 30.4. Configuring an ethtool coalesce settings by using the network RHEL system role By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput. You configure coalesce settings in the connection profile of the network interface. By using Ansible and the network RHEL role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up The settings specified in the example playbook include the following: rx_frames: <value> Sets the number of RX frames. tx_frames: <value> Sets the number of TX frames. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the current coalesce settings of the network device: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 30.5. Increasing the ring buffer size to reduce a high packet drop rate by using nmcli Increase the size of an Ethernet device's ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues. Receive ring buffers are shared between the device driver and network interface controller (NIC). The card assigns a transmit (TX) and receive (RX) ring buffer. As the name implies, the ring buffer is a circular buffer where an overflow overwrites existing data. There are two ways to move data from the NIC to the kernel, hardware interrupts and software interrupts, also called SoftIRQs. The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket. The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance. Procedure Display the packet drop statistics of the interface: Note that the output of the command depends on the network card and the driver. High values in discard or drop counters indicate that the available buffer fills up faster than the kernel can process the packets. Increasing the ring buffers can help to avoid such loss. Display the maximum ring buffer sizes: If the values in the Pre-set maximums section are higher than in the Current hardware settings section, you can change the settings in the next steps. Identify the NetworkManager connection profile that uses the interface: Update the connection profile, and increase the ring buffers: To increase the RX ring buffer, enter: To increase the TX ring buffer, enter: Reload the NetworkManager connection: Important Depending on the driver your NIC uses, changing the ring buffer settings can briefly interrupt the network connection. Additional resources ifconfig and ip commands report packet drops (Red Hat Knowledgebase) Should I be concerned about a 0.05% packet drop rate? (Red Hat Knowledgebase) ethtool(8) man page on your system 30.6.
Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role Increase the size of an Ethernet device's ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues. Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs. The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket. The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance. You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You know the maximum ring buffer sizes that the device supports. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up The settings specified in the example playbook include the following: rx: <value> Sets the maximum number of received ring buffer entries. tx: <value> Sets the maximum number of transmitted ring buffer entries. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the maximum ring buffer sizes: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 30.7. 
Configuring an ethtool channels settings by using nmcli By using NetworkManager, you can manage network devices and connections. The ethtool utility manages the link speed and related settings of a network interface card. ethtool handles IRQ based communication with associated devices to manage related channels settings in connection profiles. Procedure Display the channels associated with a network device: Update the channel settings of a network interface: Reactivate the network profile: Verification Check the updated channel settings associated with the network device: Additional resources nmcli(5) man page on your system
[ "nmcli con modify enp1s0 ethtool.feature-rx on ethtool.feature-tx off", "nmcli con modify enp1s0 ethtool.feature-tx \"\"", "nmcli connection up enp1s0", "ethtool -k network_device", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_enp1s0\": { \"active\": true, \"device\": \"enp1s0\", \"features\": { \"rx_gro_hw\": \"off, \"tx_gso_list\": \"on, \"tx_sctp_segmentation\": \"off\", }", "nmcli connection modify enp1s0 ethtool.coalesce-rx-frames 128", "nmcli connection modify enp1s0 ethtool.coalesce-rx-frames \"\"", "nmcli connection up enp1s0", "ethtool -c network_device", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -c enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> rx-frames: 128 tx-frames: 128", "ethtool -S enp1s0 rx_queue_0_drops: 97326 rx_queue_1_drops: 63783", "ethtool -g enp1s0 Ring parameters for enp1s0 : Pre-set maximums: RX: 4096 RX Mini: 0 RX Jumbo: 16320 TX: 4096 Current hardware settings: RX: 255 RX Mini: 0 RX Jumbo: 0 TX: 255", "nmcli connection show NAME UUID TYPE DEVICE Example-Connection a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0", "nmcli connection modify Example-Connection ethtool.ring-rx 4096", "nmcli connection modify Example-Connection ethtool.ring-tx 4096", "nmcli connection up Example-Connection", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -g enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> Current hardware settings: RX: 4096 RX Mini: 0 RX Jumbo: 0 TX: 4096", "ethtool --show-channels enp1s0 Channel parameters for enp1s0 : Pre-set maximums: RX: 4 TX: 3 Other: 10 Combined: 63 Current hardware settings: RX: 1 TX: 1 Other: 1 Combined: 1", "nmcli connection modify enp1s0 ethtool.channels-rx 4 ethtool.channels-tx 3 ethtools.channels-other 9 ethtool.channels-combined 50", "nmcli connection up enp1s0", "ethtool --show-channels enp1s0 Channel parameters for enp1s0 : Pre-set maximums: RX: 4 TX: 3 Other: 10 Combined: 63 Current hardware settings: RX: 4 TX: 3 Other: 9 Combined: 50" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-ethtool-settings-in-networkmanager-connection-profiles_configuring-and-managing-networking
12.5.4. Creating an iSCSI-based Storage Pool with virsh
12.5.4. Creating an iSCSI-based Storage Pool with virsh Use pool-define-as to define the pool from the command line Storage pool definitions can be created with the virsh command line tool. Creating storage pools with virsh is useful for systems administrators using scripts to create multiple storage pools. The virsh pool-define-as command has several parameters which are accepted in the following format: The parameters are explained as follows: type defines this pool as a particular type, iscsi for example name must be unique and sets the name for the storage pool source-host and source-path the host name and iSCSI IQN respectively source-dev and source-name these parameters are not required for iSCSI-based pools, use a - character to leave the field blank. target defines the location for mounting the iSCSI device on the host physical machine The example below creates the same iSCSI-based storage pool as the previous step. Verify the storage pool is listed Verify the storage pool object is created correctly and the state reports as inactive . Start the storage pool Use the virsh command pool-start for this. pool-start enables a storage pool, allowing it to be used for volumes and guest virtual machines. Turn on autostart Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts. Verify that the iscsirhel6guest pool has autostart set: Verify the storage pool configuration Verify the storage pool was created correctly, the sizes reported correctly, and the state reports as running . An iSCSI-based storage pool is now available.
[ "virsh pool-define-as name type source-host source-path source-dev source-name target", "virsh pool-define-as --name scsirhel6guest --type iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel6guest --target /dev/disk/by-path Pool iscsirhel6guest defined", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes iscsirhel6guest inactive no", "virsh pool-start guest_images_disk Pool guest_images_disk started virsh pool-list --all Name State Autostart ----------------------------------------- default active yes iscsirhel6guest active no", "virsh pool-autostart iscsirhel6guest Pool iscsirhel6guest marked as autostarted", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes iscsirhel6guest active yes", "virsh pool-info iscsirhel6guest Name: iscsirhel6guest UUID: afcc5367-6770-e151-bcb3-847bc36c5e28 State: running Persistent: unknown Autostart: yes Capacity: 100.31 GB Allocation: 0.00 Available: 100.31 GB" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-storage_pools-creating-iscsi-adding_target_virsh
Chapter 2. Red Hat Ansible Automation Platform components
Chapter 2. Red Hat Ansible Automation Platform components Ansible Automation Platform is composed of services that are connected together to meet your automation needs. These services provide the ability to store, make decisions for, and execute automation. All of these functions are available through a user interface (UI) and RESTful application programming interface (API). Deploy each of the following components so that all features and capabilities are available for use without the need to take further action: Platform gateway Automation hub Private automation hub High availability automation hub Event-Driven Ansible controller Automation mesh Automation execution environments Ansible Galaxy Automation content navigator 2.1. Platform gateway Platform gateway is the service that handles authentication and authorization for the Ansible Automation Platform. It provides a single entry into the Ansible Automation Platform and serves the platform user interface so you can authenticate and access all of the Ansible Automation Platform services from a single location. For more information about the services available in the Ansible Automation Platform, refer to Key functionality and concepts in Getting started with Ansible Automation Platform . The platform gateway includes an activity stream that captures changes to gateway resources, such as the creation or modification of organizations, users, and service clusters, among others. For each change, the activity stream collects information about the time of the change, the user that initiated the change, the action performed, and the actual changes made to the object, when possible. The information gathered varies depending on the type of change. You can access the details captured by the activity stream from the API: /api/gateway/v1/activitystream/ 2.2. Ansible automation hub Ansible automation hub is a repository for certified content of Ansible Content Collections. It is the centralized repository for Red Hat and its partners to publish content, and for customers to discover certified, supported Ansible Content Collections. Red Hat Ansible Certified Content provides users with content that has been tested and is supported by Red Hat. 2.3. Private automation hub Private automation hub provides both disconnected and on-premise solutions for synchronizing content. You can synchronize collections and execution environment images from Red Hat cloud automation hub, storing and serving your own custom automation collections and execution images. You can also use other sources such as Ansible Galaxy or other container registries to provide content to your private automation hub. Private automation hub can integrate into your enterprise directory and your CI/CD pipelines. 2.4. High availability automation hub A high availability (HA) configuration increases reliability and scalability for automation hub deployments. HA deployments of automation hub have multiple nodes that concurrently run the same service with a load balancer distributing workload (an "active-active" configuration). This configuration eliminates single points of failure to minimize service downtime and allows you to easily add or remove nodes to meet workload demands. 2.5. Event-Driven Ansible controller The Event-Driven Ansible controller is the interface for event-driven automation and introduces automated resolution of IT requests. Event-Driven Ansible controller helps you connect to sources of events and act on those events by using rulebooks.
This technology improves IT speed and agility, and enables consistency and resilience. With Event-Driven Ansible, you can: Automate decision making Use many event sources Implement event-driven automation within and across many IT use cases Additional resources Using automation decisions 2.6. Automation mesh Automation mesh is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks. Automation mesh provides: Dynamic cluster capacity that scales independently, allowing you to create, register, group, ungroup and deregister nodes with minimal downtime. Control and execution plane separation that enables you to scale playbook execution capacity independently from control plane capacity. Deployment choices that are resilient to latency, reconfigurable without outage, and that dynamically re-route to choose a different path when outages exist. Mesh routing changes. Connectivity that includes bi-directional, multi-hop mesh communication possibilities that are Federal Information Processing Standards (FIPS) compliant. 2.7. Automation execution environments Automation execution environments are container images on which all automation in Red Hat Ansible Automation Platform is run. They provide a solution that includes the Ansible execution engine and hundreds of modules that help users automate all aspects of IT environments and processes. Automation execution environments automate commonly used operating systems, infrastructure platforms, network devices, and clouds. 2.8. Ansible Galaxy Ansible Galaxy is a hub for finding, reusing, and sharing Ansible content. Community-provided Galaxy content, in the form of prepackaged roles, can help start automation projects. Roles for provisioning infrastructure, deploying applications, and completing other tasks can be dropped into Ansible Playbooks and be applied immediately to customer environments. 2.9. Automation content navigator Automation content navigator is a textual user interface (TUI) that becomes the primary command line interface into the automation platform, covering use cases such as content building, running automation locally in an execution environment, running automation in Ansible Automation Platform, and providing the foundation for future integrated development environments (IDEs).
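As a brief, hedged sketch of that last point, the following command shows how automation content navigator might be used to run a playbook locally inside an execution environment; the playbook name and the execution environment image are placeholders, so substitute the values used in your environment.
# Run a playbook locally in an execution environment by using automation content navigator
ansible-navigator run site.yml --execution-environment-image registry.example.com/my-ee:latest --mode stdout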
Chapter 6. Camel K trait configuration reference
Chapter 6. Camel K trait configuration reference This chapter provides reference information about advanced features and core capabilities that you can configure on the command line at runtime using traits . Camel K provides feature traits to configure specific features and technologies. Camel K provides platform traits to configure internal Camel K core capabilities. Important The Red Hat Integration - Camel K 1.6 includes the OpenShift and Knative profiles. The Kubernetes profile has community-only support. It also includes Java, and YAML DSL support for integrations. Other languages such as XML, Groovy, JavaScript, and Kotlin have community-only support. This chapter includes the following sections: Camel K feature traits Section 6.2.2, "Knative Trait" - Technology Preview Section 6.2.3, "Knative Service Trait" - Technology Preview Section 6.2.9, "Prometheus Trait" Section 6.2.10, "Pdb Trait" Section 6.2.11, "Pull Secret Trait" Section 6.2.12, "Route Trait" Section 6.2.13, "Service Trait" Camel K core platform traits Section 6.3.1, "Builder Trait" Section 6.3.3, "Camel Trait" Section 6.3.2, "Container Trait" Section 6.3.4, "Dependencies Trait" Section 6.3.5, "Deployer Trait" Section 6.3.6, "Deployment Trait" Section 6.3.7, "Environment Trait" Section 6.3.8, "Error Handler Trait" Section 6.3.9, "Jvm Trait" Section 6.3.10, "Kamelets Trait" Section 6.3.11, "NodeAffinity Trait" Section 6.3.12, "Openapi Trait" - Technology Preview Section 6.3.13, "Owner Trait" Section 6.3.14, "Platform Trait" Section 6.3.15, "Quarkus Trait" 6.1. Camel K trait and profile configuration This section explains the important Camel K concepts of traits and profiles , which are used to configure advanced Camel K features at runtime. Camel K traits Camel K traits are advanced features and core capabilities that you can configure on the command line to customize Camel K integrations. For example, this includes feature traits that configure interactions with technologies such as 3scale API Management, Quarkus, Knative, and Prometheus. Camel K also provides internal platform traits that configure important core platform capabilities such as Camel support, containers, dependency resolution, and JVM support. Camel K profiles Camel K profiles define the target cloud platforms on which Camel K integrations run. Supported profiles are OpenShift and Knative profiles. Note When you run an integration on OpenShift, Camel K uses the Knative profile when OpenShift Serverless is installed on the cluster. Camel K uses the OpenShift profile when OpenShift Serverless is not installed. You can also specify the profile at runtime using the kamel run --profile option. Camel K provides useful defaults for all traits, taking into account the target profile on which the integration runs. However, advanced users can configure Camel K traits for custom behavior. Some traits only apply to specific profiles such as OpenShift or Knative . For more details, see the available profiles in each trait description. Camel K trait configuration Each Camel trait has a unique ID that you can use to configure the trait on the command line. For example, the following command disables creating an OpenShift Service for an integration: kamel run --trait service.enabled=false my-integration.yaml You can also use the -t option to specify traits. Camel K trait properties You can use the enabled property to enable or disable each trait. 
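For example, the following commands are a minimal sketch (the integration file name is a placeholder): the first command disables the route trait through its enabled property, and the second uses the -t shorthand to combine several trait properties in a single command.
# Disable the route trait for this integration
kamel run --trait route.enabled=false my-integration.yaml
# -t is shorthand for --trait; several trait properties can be combined
kamel run -t logging.level=DEBUG -t prometheus.enabled=true my-integration.yaml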
All traits have their own internal logic to determine if they need to be enabled when the user does not activate them explicitly. Warning Disabling a platform trait may compromise the platform functionality. Some traits have an auto property, which you can use to enable or disable automatic configuration of the trait based on the environment. For example, this includes traits such as 3scale, Cron, and Knative. This automatic configuration can enable or disable the trait when the enabled property is not explicitly set, and can change the trait configuration. Most traits have additional properties that you can configure on the command line. For more details, see the descriptions for each trait in the sections that follow. 6.2. Camel K feature traits 6.2.1. Health Trait The health trait is responsible for configuring the health probes on the integration container. It is disabled by default. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.1.1. Configuration Trait properties are specified when running any integration by using the following CLI. USD kamel run --trait health.[key]=[value] --trait health.[key2]=[value2] integration.java The following configuration options are available. Property Type Description health.enabled bool Can be used to enable or disable a trait. All traits share this common property. health.liveness-probe-enabled bool Configures the liveness probe for the integration container (default false ). health.liveness-scheme string Scheme to use when connecting to the liveness probe (default HTTP ). health.liveness-initial-delay int32 Number of seconds after the container has started before the liveness probe is initiated. health.liveness-timeout int32 Number of seconds after which the liveness probe times out. health.liveness-period int32 How often to perform the liveness probe. health.liveness-success-threshold int32 Minimum consecutive successes for the liveness probe to be considered successful after having failed. health.liveness-failure-threshold int32 Minimum consecutive failures for the liveness probe to be considered failed after having succeeded. health.readiness-probe-enabled bool Configures the readiness probe for the integration container (default true ). health.readiness-scheme string Scheme to use when connecting to the readiness probe (default HTTP ). health.readiness-initial-delay int32 Number of seconds after the container has started before the readiness probe is initiated. health.readiness-timeout int32 Number of seconds after which the readiness probe times out. health.readiness-period int32 How often to perform the readiness probe. health.readiness-success-threshold int32 Minimum consecutive successes for the readiness probe to be considered successful after having failed. health.readiness-failure-threshold int32 Minimum consecutive failures for the readiness probe to be considered failed after having succeeded. health.startup-probe-enabled bool Configures the startup probe for the integration container (default false ). health.startup-scheme string Scheme to use when connecting to the startup probe (default HTTP ). health.startup-initial-delay int32 Number of seconds after the container has started before the startup probe is initiated. health.startup-timeout int32 Number of seconds after which the startup probe times out. health.startup-period int32 How often to perform the startup probe. 
health.startup-success-threshold int32 Minimum consecutive successes for the startup probe to be considered successful after having failed. health.startup-failure-threshold int32 Minimum consecutive failures for the startup probe to be considered failed after having succeeded. 6.2.2. Knative Trait The Knative trait automatically discovers addresses of Knative resources and injects them into the running integration. The full Knative configuration is injected into the CAMEL_KNATIVE_CONFIGURATION environment variable in JSON format. The Camel Knative component will then use the full configuration to configure the routes. The trait is enabled by default when the Knative profile is active. This trait is available in the following profiles: Knative . 6.2.2.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait knative.[key]=[value] --trait knative.[key2]=[value2] integration.java The following configuration options are available: Property Type Description knative.enabled bool Can be used to enable or disable a trait. All traits share this common property. knative.configuration string Can be used to inject a complete Knative configuration in JSON format. knative.channel-sources []string List of channels used as source of integration routes. Can contain simple channel names or full Camel URIs. knative.channel-sinks []string List of channels used as destination of integration routes. Can contain simple channel names or full Camel URIs. knative.endpoint-sources []string List of endpoints used as source of integration routes. knative.endpoint-sinks []string List of endpoints used as destination of integration routes. Can contain simple endpoint names or full Camel URIs. knative.event-sources []string List of event types that the integration will be subscribed to. Can contain simple event types or full Camel URIs (to use a specific broker different from "default"). knative.event-sinks []string List of event types that the integration will produce. Can contain simple event types or full Camel URIs (to use a specific broker). knative.filter-source-channels bool Enables filtering on events based on the header "ce-knativehistory". Since this header has been removed in newer versions of Knative, filtering is disabled by default. knative.sink-binding bool Allows binding the integration to a sink via a Knative SinkBinding resource. This can be used when the integration targets a single sink. It's enabled by default when the integration targets a single sink (except when the integration is owned by a Knative source). knative.auto bool Enable automatic discovery of all trait properties. 6.2.3. Knative Service Trait The Knative Service trait allows you to configure options when running the integration as a Knative service instead of a standard Kubernetes Deployment. Running integrations as Knative Services adds auto-scaling (and scaling-to-zero) features, but those features are only meaningful when the routes use an HTTP endpoint consumer. This trait is available in the following profiles: Knative . 6.2.3.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait knative-service.[key]=[value] --trait knative-service.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description knative-service.enabled bool Can be used to enable or disable a trait. All traits share this common property. knative-service.annotations map[string]string The annotations are added to the route. 
This can be used to set knative service specific annotations. For more details see, Route Specific Annotations . CLI usage example: -t "knative-service.annotations.'haproxy.router.openshift.io/balance'=roundrobin" knative-service.autoscaling-class string Configures the Knative autoscaling class property (e.g. to set hpa.autoscaling.knative.dev or kpa.autoscaling.knative.dev autoscaling). Refer to the Knative documentation for more information. knative-service.autoscaling-metric string Configures the Knative autoscaling metric property (e.g. to set concurrency based or cpu based autoscaling). Refer to the Knative documentation for more information. knative-service.autoscaling-target int Sets the allowed concurrency level or CPU percentage (depending on the autoscaling metric) for each Pod. Refer to the Knative documentation for more information. knative-service.min-scale int The minimum number of Pods that should be running at any time for the integration. It's zero by default, meaning that the integration is scaled down to zero when not used for a configured amount of time. Refer to the Knative documentation for more information. knative-service.max-scale int An upper bound for the number of Pods that can be running in parallel for the integration. Knative has its own cap value that depends on the installation. Refer to the Knative documentation for more information. knative-service.auto bool Automatically deploy the integration as Knative service when all conditions hold: Integration is using the Knative profile All routes are either starting from a HTTP based consumer or a passive consumer (e.g. direct is a passive consumer) 6.2.4. Logging Trait The Logging trait is used to configure Integration runtime logging options (such as color and format). The logging backend is provided by Quarkus, whose configuration is documented at https://quarkus.io/guides/logging . This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.4.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait logging.[key]=[value] --trait logging.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description logging.enabled bool Can be used to enable or disable a trait. All traits share this common property. logging.color bool Colorize the log output logging.format string Logs message format logging.level string Adjust the logging level (defaults to INFO) logging.json bool Output the logs in JSON logging.json-pretty-print bool Enable "pretty printing" of the JSON logs 6.2.5. Master Trait The Master trait allows to configure the integration to automatically leverage Kubernetes resources for leader election and starting master routes only on certain instances. It is activated automatically when using the master endpoint in a route. For example: from("master:lockname:telegram:bots")... . Note This trait adds special permissions to the integration service account to read/write configmaps and read pods. It is recommended to use a different service account than "default" when running the integration. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.5.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait master.[key]=[value] --trait master.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description master.enabled bool Can be used to enable or disable a trait. All traits share this common property. master.auto bool Enables automatic configuration of the trait. master.include-delegate-dependencies bool When this flag is active, the operator analyzes the source code to add dependencies required by delegate endpoints. For example: when using master:lockname:timer , then camel:timer is automatically added to the set of dependencies. It is enabled by default. master.resource-name string Name of the configmap/lease resource that will be used to store the lock. Defaults to "<integration-name>-lock". master.resource-type string Type of Kubernetes resource to use for locking ("ConfigMap" or "Lease"). Defaults to "Lease". master.label-key string Label that will be used to identify all pods contending the lock. Defaults to "camel.apache.org/integration". master.label-value string Label value that will be used to identify all pods contending the lock. Defaults to the integration name. 6.2.6. Mount Trait The Mount trait can be used to configure volumes mounted on the Integration Pods. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Note The mount trait is a platform trait and cannot be disabled by the user. 6.2.6.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait mount.[key]=[value] --trait mount.[key2]=[value2] integration.java The following configuration options are available: Property Type Description mount.enabled bool Deprecated: no longer in use. mount.configs []string A list of configuration pointing to configmap/secret. The configuration are expected to be UTF-8 resources as they are processed by runtime Camel Context and tried to be parsed as property files. They are also made available on the classpath in order to ease their usage directly from the Route. mount.resources []string A list of resources (text or binary content) pointing to configmap/secret. The resources are expected to be any resource type (text or binary content). The destination path can be either a default location or any path specified by the user. mount.volumes []string A list of Persistent Volume Claims to be mounted. Syntax: mount.hot-reload bool Enable "hot reload" when a secret/configmap mounted is edited (default false ) Note The syntax for mount.configs property is, Syntax: [configmap | secret]:name[/key] , where name represents the resource name and key optionally represents the resource key to be filtered. The syntax for mount.resources property is, Syntax: [configmap | secret]:name[/key] [@path] , where name represents the resource name, key optionally represents the resource key to be filtered and path represents the destination path. 6.2.7. Telemetry Trait Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The Telemetry trait can be used to automatically publish tracing information to an OTLP compatible collector. The trait is able to automatically discover the telemetry OTLP endpoint available in the namespace (supports Jaeger in version 1.35+). The Telemetry trait is disabled by default. Warning The Telemetry trait cannot be enabled at the same time as the Tracing trait. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.7.1. Configuration Trait properties can be specified when running any integration with the CLI. USD kamel run --trait telemetry.[key]=[value] --trait telemetry.[key2]=[value2] integration.java The following configuration options are available: Property Type Description telemetry.enabled bool Can be used to enable or disable a trait. All traits share this common property. telemetry.auto bool Enables automatic configuration of the trait, including automatic discovery of the telemetry endpoint. telemetry.service-name string The name of the service that publishes telemetry data (defaults to the integration name) telemetry.endpoint string The target endpoint of the Telemetry service (automatically discovered by default) telemetry.sampler string The sampler of the telemetry used for tracing (default "on") telemetry.sampler-ratio string The sampler ratio of the telemetry used for tracing telemetry.sampler-parent-based bool The sampler of the telemetry used for tracing is parent based (default "true") 6.2.7.2. Examples To activate tracing to a deployed Jaeger instance that exposes the OTLP API, using automatic endpoint discovery: USD kamel run -t telemetry.enabled=true ... To define a specific deployed OTLP gRPC receiver: USD kamel run -t telemetry.enabled=true -t telemetry.endpoint=http://instance-collector:4317 ... To define a different telemetry service name: USD kamel run -t telemetry.enabled=true -t telemetry.service-name=tracer_myintegration ... To use a ratio sampler with a sampling ratio of 1 in every 1,000: USD kamel run -t telemetry.enabled=true -t telemetry.sampler=ratio -t telemetry.sampler-ratio=0.001 ... 6.2.8. Pod Trait The pod trait allows the customization of the Integration pods. It applies the PodSpecTemplate struct contained in the Integration .spec.podTemplate field to the Integration deployment Pod template, using a strategic merge patch. This is used to customize the container where Camel routes execute, by using the integration container name. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Note 1: In the current implementation, template options override the configuration options defined by using the CLI. For example: USD kamel run Integration.java --pod-template template.yaml --env TEST_VARIABLE=will_be_overriden --env ANOTHER_VARIABLE=Im_There The value from the template overwrites the TEST_VARIABLE environment variable, while ANOTHER_VARIABLE is unchanged. Note 2: Changes to the integration container entrypoint are not applied due to current trait execution order. 6.2.8.1. Configuration Trait properties are specified when running any integration by using the CLI. USD kamel run --trait pod.[key]=[value] integration.java The following configuration options are available. Property Type Description pod.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.2.8.2. 
Sidecar Containers Example with the following Integration, that reads files from a directory: Integration.groovy from('file:///var/log') .convertBodyTo(String.class) .setBody().simple('USD{body}: {{TEST_VARIABLE}} ') .log('USD{body}') In addition, the following Pod template adds a sidecar container to the Integration Pod, generating some data into the directory, and mounts it into the integration container. template.yaml containers: - name: integration env: - name: TEST_VARIABLE value: "hello from the template" volumeMounts: - name: var-logs mountPath: /var/log - name: sidecar image: busybox command: [ "/bin/sh" , "-c", "while true; do echo USD(date -u) 'Content from the sidecar container' > /var/log/file.txt; sleep 1;done" ] volumeMounts: - name: var-logs mountPath: /var/log volumes: - name: var-logs emptyDir: { } The Integration route logs the content of the file generated by the sidecar container. Example: USD kamel run Integration.java --pod-template template.yaml ... Condition "Ready" is "True" for Integration integration [1] 2021-04-30 07:40:03,136 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:02 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:04,140 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:03 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:05,142 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:04 UTC 2021 Content from the sidecar container [1] : hello from the template 6.2.8.3. Init Containers With this trait you are able to run initContainers. To run the initContainers, you must do the following. Include at least one container in the template spec. Provide the configuration for the default container, which is integration. Following is a simple example. template.yaml containers: - name: integration initContainers: - name: init image: busybox command: [ "/bin/sh" , "-c", "echo 'hello'!" ] The integration container is overwritten by the container running the route, and the initContainer runs before the route as expected. 6.2.9. Prometheus Trait The Prometheus trait configures a Prometheus-compatible endpoint. It also creates a PodMonitor resource, so that the endpoint can be scraped automatically, when using the Prometheus operator. The metrics are exposed using MicroProfile Metrics. Warning The creation of the PodMonitor resource requires the Prometheus Operator custom resource definition to be installed. You can set pod-monitor to false for the Prometheus trait to work without the Prometheus Operator. The Prometheus trait is disabled by default. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.9.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait prometheus.[key]=[value] --trait prometheus.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description prometheus.enabled bool Can be used to enable or disable a trait. All traits share this common property. prometheus.pod-monitor bool Whether a PodMonitor resource is created (default true ). prometheus.pod-monitor-labels []string The PodMonitor resource labels, applicable when pod-monitor is true . 6.2.10. Pdb Trait The PDB trait allows to configure the PodDisruptionBudget resource for the Integration pods. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.10.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait pdb.[key]=[value] --trait pdb.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description pdb.enabled bool Can be used to enable or disable a trait. All traits share this common property. pdb.min-available string The number of pods for the Integration that must still be available after an eviction. It can be either an absolute number or a percentage. Only one of min-available and max-unavailable can be specified. pdb.max-unavailable string The number of pods for the Integration that can be unavailable after an eviction. It can be either an absolute number or a percentage (default 1 if min-available is also not set). Only one of max-unavailable and min-available can be specified. 6.2.11. Pull Secret Trait The Pull Secret trait sets a pull secret on the pod, to allow Kubernetes to retrieve the container image from an external registry. The pull secret can be specified manually or, in case you've configured authentication for an external container registry on the IntegrationPlatform , the same secret is used to pull images. It's enabled by default whenever you configure authentication for an external container registry, so it assumes that external registries are private. If your registry does not need authentication for pulling images, you can disable this trait. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.11.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait pull-secret.[key]=[value] --trait pull-secret.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description pull-secret.enabled bool Can be used to enable or disable a trait. All traits share this common property. pull-secret.secret-name string The pull secret name to set on the Pod. If left empty this is automatically taken from the IntegrationPlatform registry configuration. pull-secret.image-puller-delegation bool When using a global operator with a shared platform, this enables delegation of the system:image-puller cluster role on the operator namespace to the integration service account. pull-secret.auto bool Automatically configures the platform registry secret on the pod if it is of type kubernetes.io/dockerconfigjson . 6.2.12. Route Trait The Route trait can be used to configure the creation of OpenShift routes for the integration. The certificate and key contents may be sourced either from the local filesystem or in a Openshift secret object. The user may use the parameters ending in -secret (example: tls-certificate-secret ) to reference a certificate stored in a secret . Parameters ending in -secret have higher priorities and in case the same route parameter is set, for example: tls-key-secret and tls-key , then tls-key-secret is used. The recommended approach to set the key and certificates is to use secrets to store their contents and use the following parameters to reference them: tls-certificate-secret , tls-key-secret , tls-ca-certificate-secret , tls-destination-ca-certificate-secret See the examples section at the end of this page to see the setup options. This trait is available in the following profiles: OpenShift . 6.2.12.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait route.[key]=[value] --trait route.[key2]=[value2] integration.java The following configuration options are available: Property Type Description route.enabled bool Can be used to enable or disable a trait. All traits share this common property. route.annotations map[string]string The annotations are added to route. This can be used to set route specific annotations. For annotations options see Route Specific Annotations . CLI usage example: -t "route.annotations.'haproxy.router.openshift.io/balance'=roundrobin route.host string To configure the host exposed by the route. route.tls-termination string The TLS termination type, like edge , passthrough or reencrypt . Refer to the OpenShift route documentation for additional information. route.tls-certificate string The TLS certificate contents. Refer to the OpenShift route documentation for additional information. route.tls-certificate-secret string The secret name and key reference to the TLS certificate. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-key string The TLS certificate key contents. Refer to the OpenShift route documentation for additional information. route.tls-key-secret string The secret name and key reference to the TLS certificate key. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-ca-certificate string The TLS CA certificate contents. Refer to the OpenShift route documentation for additional information. route.tls-ca-certificate-secret string The secret name and key reference to the TLS CA certificate. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-destination-ca-certificate string The destination CA certificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify. Refer to the OpenShift route documentation for additional information. route.tls-destination-ca-certificate-secret string The secret name and key reference to the destination CA certificate. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-insecure-edge-termination-policy string To configure how to deal with insecure traffic, e.g. Allow , Disable or Redirect traffic. Refer to the OpenShift route documentation for additional information. 6.2.12.2. 
Examples These examples uses secrets to store the certificates and keys to be referenced in the integrations. Read Openshift route documentation for detailed information about routes. The PlatformHttpServer.java is the integration example. As a requirement to run these examples, you should have a secret with a key and certificate. 6.2.12.2.1. Generate a self-signed certificate and create a secret openssl genrsa -out tls.key openssl req -new -key tls.key -out csr.csr -subj "/CN=my-server.com" openssl x509 -req -in csr.csr -signkey tls.key -out tls.crt oc create secret tls my-combined-certs --key=tls.key --cert=tls.crt 6.2.12.2.2. Making an HTTP request to the route For all examples, you can use the following curl command to make an HTTP request. It makes use of inline scripts to retrieve the openshift namespace and cluster base domain, if you are using a shell which doesn't support these inline scripts, you should replace the inline scripts with the values of your actual namespace and base domain. curl -k https://platform-http-server-`oc config view --minify -o 'jsonpath={..namespace}'`.`oc get dnses/cluster -ojsonpath='{.spec.baseDomain}'`/hello?name=Camel-K To add an edge route using secrets, use the parameters ending in -secret to set the secret name which contains the certificate. This route example trait references a secret named my-combined-certs which contains two keys named tls.key and tls.crt . kamel run --dev PlatformHttpServer.java -t route.tls-termination=edge -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key To add a passthrough route using secrets, the TLS is setup in the integration pod, the keys and certificates should be visible in the running integration pod, to achieve this we are using the --resource kamel parameter to mount the secret in the integration pod, then we use some camel quarkus parameters to reference these certificate files in the running pod, they start with -p quarkus.http.ssl.certificate . This route example trait references a secret named my-combined-certs which contains two keys named tls.key and tls.crt . kamel run --dev PlatformHttpServer.java --resource secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=passthrough -t container.port=8443 To add a reencrypt route using secrets, the TLS is setup in the integration pod, the keys and certificates should be visible in the running integration pod, to achieve this we are using the --resource kamel parameter to mount the secret in the integration pod, then we use some camel quarkus parameters to reference these certificate files in the running pod, they start with -p quarkus.http.ssl.certificate . This route example trait references a secret named my-combined-certs which contains two keys named tls.key and tls.crt . 
kamel run --dev PlatformHttpServer.java --resource secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=reencrypt -t route.tls-destination-ca-certificate-secret=my-combined-certs/tls.crt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443 To add a reencrypt route using a specific certificate from a secret for the route and Openshift service serving certificates for the integration endpoint. This way the Openshift service serving certificates is set up only in the integration pod. The keys and certificates should be visible in the running integration pod, to achieve this we are using the --resource kamel parameter to mount the secret in the integration pod, then we use some camel quarkus parameters to reference these certificate files in the running pod, they start with -p quarkus.http.ssl.certificate . This route example trait references a secret named my-combined-certs which contains two keys named tls.key and tls.crt . kamel run --dev PlatformHttpServer.java --resource secret:cert-from-openshift@/etc/ssl/cert-from-openshift -p quarkus.http.ssl.certificate.file=/etc/ssl/cert-from-openshift/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/cert-from-openshift/tls.key -t route.tls-termination=reencrypt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443 Then you should annotate the integration service to inject the Openshift service serving certificates oc annotate service platform-http-server service.beta.openshift.io/serving-cert-secret-name=cert-from-openshift To add an edge route using a certificate and a private key provided from your local filesystem. This example uses inline scripts to read the certificate and private key file contents, then remove all new line characters, (this is required to set the certificate as parameter's values), so the values are in a single line. kamel run PlatformHttpServer.java --dev -t route.tls-termination=edge -t route.tls-certificate="USD(cat tls.crt|awk 'NF {sub(/\r/, ""); printf "%s\\n",USD0;}')" -t route.tls-key="USD(cat tls.key|awk 'NF {sub(/\r/, ""); printf "%s\\n",USD0;}')" 6.2.13. Service Trait The Service trait exposes the integration with a Service resource so that it can be accessed by other applications (or integrations) in the same namespace. It's enabled by default if the integration depends on a Camel component that can expose a HTTP endpoint. This trait is available in the following profiles: Kubernetes, OpenShift . 6.2.13.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait service.[key]=[value] --trait service.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description service.enabled bool Can be used to enable or disable a trait. All traits share this common property. service.auto bool To automatically detect from the code if a Service needs to be created. service.node-port bool Enable Service to be exposed as NodePort (default false ). 6.3. Camel K platform traits 6.3.1. Builder Trait The builder trait is internally used to determine the best strategy to build and configure IntegrationKits. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 
Warning The builder trait is a platform trait : disabling it may compromise the platform functionality. 6.3.1.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait builder.[key]=[value] --trait builder.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description builder.enabled bool Can be used to enable or disable a trait. All traits share this common property. builder.verbose bool Enable verbose logging on build components that support it (e.g., OpenShift build pod). Kaniko and Buildah are not supported. builder.properties []string A list of properties to be provided to the build task 6.3.2. Container Trait The Container trait can be used to configure properties of the container where the integration will run. It also provides configuration for Services associated to the container. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The container trait is a platform trait : disabling it may compromise the platform functionality. 6.3.2.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait container.[key]=[value] --trait container.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description container.enabled bool Can be used to enable or disable a trait. All traits share this common property. container.auto bool container.request-cpu string The minimum amount of CPU required. container.request-memory string The minimum amount of memory required. container.limit-cpu string The maximum amount of CPU required. container.limit-memory string The maximum amount of memory required. container.expose bool Can be used to enable/disable exposure via kubernetes Service. container.port int To configure a different port exposed by the container (default 8080 ). container.port-name string To configure a different port name for the port exposed by the container (default http ). container.service-port int To configure under which service port the container port is to be exposed (default 80 ). container.service-port-name string To configure under which service port name the container port is to be exposed (default http ). container.name string The main container name. It's named integration by default. container.image string The main container image container.probes-enabled bool ProbesEnabled enable/disable probes on the container (default false ) 6.3.3. Camel Trait The Camel trait can be used to configure versions of Apache Camel K runtime and related libraries, it cannot be disabled. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The camel trait is a platform trait : disabling it may compromise the platform functionality. 6.3.3.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait camel.[key]=[value] --trait camel.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description camel.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.4. Dependencies Trait The Dependencies trait is internally used to automatically add runtime dependencies based on the integration that the user wants to run. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 
Warning The dependencies trait is a platform trait : disabling it may compromise the platform functionality. 6.3.4.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait dependencies.[key]=[value] Integration.java The following configuration options are available: Property Type Description dependencies.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.5. Deployer Trait The deployer trait can be used to explicitly select the kind of high level resource that will deploy the integration. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The deployer trait is a platform trait : disabling it may compromise the platform functionality. 6.3.5.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait deployer.[key]=[value] --trait deployer.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description deployer.enabled bool Can be used to enable or disable a trait. All traits share this common property. deployer.kind string Allows to explicitly select the desired deployment kind between deployment , cron-job or knative-service when creating the resources for running the integration. 6.3.6. Deployment Trait The Deployment trait is responsible for generating the Kubernetes deployment that will make sure the integration will run in the cluster. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The deployment trait is a platform trait : disabling it may compromise the platform functionality. 6.3.6.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait deployment.[key]=[value] Integration.java The following configuration options are available: Property Type Description deployment.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.7. Environment Trait The environment trait is used internally to inject standard environment variables in the integration container, such as NAMESPACE , POD_NAME and others. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The environment trait is a platform trait : disabling it may compromise the platform functionality. 6.3.7.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait environment.[key]=[value] --trait environment.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description environment.enabled bool Can be used to enable or disable a trait. All traits share this common property. environment.container-meta bool Enables injection of NAMESPACE and POD_NAME environment variables (default true ) 6.3.8. Error Handler Trait The error-handler is a platform trait used to inject Error Handler source into the integration runtime. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The error-handler trait is a platform trait : disabling it may compromise the platform functionality. 6.3.8.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait error-handler.[key]=[value] --trait error-handler.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description error-handler.enabled bool Can be used to enable or disable a trait. All traits share this common property. error-handler.ref string The error handler ref name provided or found in application properties 6.3.9. Jvm Trait The JVM trait is used to configure the JVM that runs the integration. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The jvm trait is a platform trait : disabling it may compromise the platform functionality. 6.3.9.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait jvm.[key]=[value] --trait jvm.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description jvm.enabled bool Can be used to enable or disable a trait. All traits share this common property. jvm.debug bool Activates remote debugging, so that a debugger can be attached to the JVM, e.g., using port-forwarding jvm.debug-suspend bool Suspends the target JVM immediately before the main class is loaded jvm.print-command bool Prints the command used the start the JVM in the container logs (default true ) jvm.debug-address string Transport address at which to listen for the newly launched JVM (default *:5005 ) jvm.options []string A list of JVM options jvm.classpath string Additional JVM classpath (use Linux classpath separator) 6.3.9.2. Examples Include an additional classpath to the Integration : USD kamel run -t jvm.classpath=/path/to/my-dependency.jar:/path/to/another-dependency.jar ... 6.3.10. Kamelets Trait The kamelets trait is a platform trait used to inject Kamelets into the integration runtime. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The kamelets trait is a platform trait : disabling it may compromise the platform functionality. 6.3.10.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait kamelets.[key]=[value] --trait kamelets.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description kamelets.enabled bool Can be used to enable or disable a trait. All traits share this common property. kamelets.auto bool Automatically inject all referenced Kamelets and their default configuration (enabled by default) kamelets.list string Comma separated list of Kamelet names to load into the current integration 6.3.11. NodeAffinity Trait The NodeAffinity trait enables you to constrain the nodes that the integration pods are eligible to schedule on, through the following paths: Based on labels on the node or with inter-pod affinity and anti-affinity. Based on labels on pods that are already running on the nodes. This trait is disabled by default. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.3.11.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait affinity.[key]=[value] --trait affinity.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description affinity.enabled bool Can be used to enable or disable a trait. All traits share this common property. 
affinity.pod-affinity bool Always co-locates multiple replicas of the integration on the same node (default false ). affinity.pod-anti-affinity bool Never co-locates multiple replicas of the integration on the same node (default false ). affinity.node-affinity-labels []string Defines a set of nodes the integration pod(s) are eligible to be scheduled on, based on labels on the node. affinity.pod-affinity-labels []string Defines a set of pods (namely those matching the label selector, relative to the given namespace) that the integration pod(s) should be co-located with. affinity.pod-anti-affinity-labels []string Defines a set of pods (namely those matching the label selector, relative to the given namespace) that the integration pod(s) should not be co-located with. 6.3.11.2. Examples To schedule the integration pod(s) on a specific node using the built-in node label kubernetes.io/hostname : USD kamel run -t affinity.node-affinity-labels="kubernetes.io/hostname in(node-66-50.hosted.k8s.tld)" ... To schedule a single integration pod per node (using the Exists operator): USD kamel run -t affinity.pod-anti-affinity-labels="camel.apache.org/integration" ... To co-locate the integration pod(s) with other integration pod(s): USD kamel run -t affinity.pod-affinity-labels="camel.apache.org/integration in(it1, it2)" ... The *-labels options follow the requirements from Label selectors . They can be multi-valued, in which case the requirements list is ANDed, e.g., to schedule a single integration pod per node AND not co-located with the Camel K operator pod(s): USD kamel run -t affinity.pod-anti-affinity-labels="camel.apache.org/integration" -t affinity.pod-anti-affinity-labels="camel.apache.org/component=operator" ... More information can be found in the official Kubernetes documentation about Assigning Pods to Nodes . 6.3.12. Openapi Trait The OpenAPI DSL trait is internally used to allow creating integrations from an OpenAPI spec. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The openapi trait is a platform trait : disabling it may compromise the platform functionality. 6.3.12.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait openapi.[key]=[value] Integration.java The following configuration options are available: Property Type Description openapi.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.13. Owner Trait The Owner trait ensures that all created resources belong to the integration being created and transfers annotations and labels on the integration onto these owned resources. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The owner trait is a platform trait : disabling it may compromise the platform functionality. 6.3.13.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait owner.[key]=[value] --trait owner.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description owner.enabled bool Can be used to enable or disable a trait. All traits share this common property. owner.target-annotations []string The set of annotations to be transferred owner.target-labels []string The set of labels to be transferred 6.3.14. Platform Trait The platform trait is a base trait that is used to assign an integration platform to an integration. 
In case the platform is missing, the trait is allowed to create a default platform. This feature is especially useful in contexts where there's no need to provide a custom configuration for the platform (e.g. on OpenShift the default settings work, since there's an embedded container image registry). This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The platform trait is a platform trait : disabling it may compromise the platform functionality. 6.3.14.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait platform.[key]=[value] --trait platform.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description platform.enabled bool Can be used to enable or disable a trait. All traits share this common property. platform.create-default bool To create a default (empty) platform when the platform is missing. platform.global bool Indicates if the platform should be created globally in the case of global operator (default true). platform.auto bool To automatically detect from the environment if a default platform can be created (it will be created on OpenShift only). 6.3.15. Quarkus Trait The Quarkus trait activates the Quarkus runtime. It's enabled by default. Note Compiling to a native executable, i.e. when using package-type=native , is only supported for kamelets, as well as YAML integrations. It also requires at least 4GiB of memory, so the Pod running the native build, that is either the operator Pod, or the build Pod (depending on the build strategy configured for the platform), must have enough memory available. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The quarkus trait is a platform trait : disabling it may compromise the platform functionality. 6.3.15.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait quarkus.[key]=[value] --trait quarkus.[key2]=[value2] integration.java The following configuration options are available: Property Type Description quarkus.enabled bool Can be used to enable or disable a trait. All traits share this common property. quarkus.package-type []github.com/apache/camel-k/pkg/trait.quarkusPackageType The Quarkus package types, either fast-jar or native (default fast-jar ). In case both fast-jar and native are specified, two IntegrationKit resources are created, with the native kit having precedence over the fast-jar one once ready. The order influences the resolution of the current kit for the integration. The kit corresponding to the first package type will be assigned to the integration in case no existing kit that matches the integration exists. 6.3.15.2. Supported Camel Components Camel K only supports the Camel components that are available as Camel Quarkus Extensions out-of-the-box. 6.3.15.3. Examples 6.3.15.3.1. Automatic Rollout Deployment to Native Integration While the compilation to native executables produces integrations that start faster and consume less memory at runtime, the build process is resources intensive, and takes a longer time than the packaging to traditional Java applications. In order to combine the best of both worlds, it's possible to configure the Quarkus trait to run both traditional and native builds in parallel when running an integration, e.g.: USD kamel run -t quarkus.package-type=fast-jar -t quarkus.package-type=native ... 
The integration pod runs as soon as the fast-jar build completes, and a rollout deployment to the native image is triggered as soon as the native build completes, with no service interruption.
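The other traits described in this section follow the same -t key=value pattern. For example, the owner trait can be told which labels and annotations to propagate from the integration onto the resources it owns by listing their keys in owner.target-labels and owner.target-annotations. The following command is only a sketch: the label and annotation keys are examples chosen for this illustration, not values required by Camel K.

USD kamel run -t owner.target-labels="app.kubernetes.io/part-of" -t owner.target-annotations="mycompany.com/revision" Integration.java

With this configuration, the listed keys are the ones transferred onto the owned resources.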
[ "kamel run --trait service.enabled=false my-integration.yaml", "kamel run --trait health.[key]=[value] --trait health.[key2]=[value2] integration.java", "kamel run --trait knative.[key]=[value] --trait knative.[key2]=[value2] integration.java", "kamel run --trait knative-service.[key]=[value] --trait knative-service.[key2]=[value2] Integration.java", "kamel run --trait logging.[key]=[value] --trait logging.[key2]=[value2] Integration.java", "kamel run --trait master.[key]=[value] --trait master.[key2]=[value2] Integration.java", "kamel run --trait mount.[key]=[value] --trait mount.[key2]=[value2] integration.java", "kamel run --trait telemetry.[key]=[value] --trait telemetry.[key2]=[value2] integration.java", "kamel run -t telemetry.enable=true", "kamel run -t telemetry.enable=true -t telemetry.endpoint=http://instance-collector:4317", "kamel run -t telemetry.enable=true -t telemetry.service-name=tracer_myintegration", "kamel run -t telemetry.enable=true -t telemetry.sampler=ratio -t telemetry.sampler-ratio=0.001", "kamel run Integration.java --pod-template template.yaml --env TEST_VARIABLE=will_be_overriden --env ANOTHER_VARIABLE=Im_There", "kamel run --trait pod.[key]=[value] integration.java", "from('file:///var/log') .convertBodyTo(String.class) .setBody().simple('USD{body}: {{TEST_VARIABLE}} ') .log('USD{body}')", "containers: - name: integration env: - name: TEST_VARIABLE value: \"hello from the template\" volumeMounts: - name: var-logs mountPath: /var/log - name: sidecar image: busybox command: [ \"/bin/sh\" , \"-c\", \"while true; do echo USD(date -u) 'Content from the sidecar container' > /var/log/file.txt; sleep 1;done\" ] volumeMounts: - name: var-logs mountPath: /var/log volumes: - name: var-logs emptyDir: { }", "kamel run Integration.java --pod-template template.yaml Condition \"Ready\" is \"True\" for Integration integration [1] 2021-04-30 07:40:03,136 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:02 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:04,140 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:03 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:05,142 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:04 UTC 2021 Content from the sidecar container [1] : hello from the template", "containers: - name: integration initContainers: - name: init image: busybox command: [ \"/bin/sh\" , \"-c\", \"echo 'hello'!\" ]", "kamel run --trait prometheus.[key]=[value] --trait prometheus.[key2]=[value2] Integration.java", "kamel run --trait pdb.[key]=[value] --trait pdb.[key2]=[value2] Integration.java", "kamel run --trait pull-secret.[key]=[value] --trait pull-secret.[key2]=[value2] Integration.java", "kamel run --trait route.[key]=[value] --trait route.[key2]=[value2] integration.java", "openssl genrsa -out tls.key openssl req -new -key tls.key -out csr.csr -subj \"/CN=my-server.com\" openssl x509 -req -in csr.csr -signkey tls.key -out tls.crt create secret tls my-combined-certs --key=tls.key --cert=tls.crt", "curl -k https://platform-http-server-`oc config view --minify -o 'jsonpath={..namespace}'`.`oc get dnses/cluster -ojsonpath='{.spec.baseDomain}'`/hello?name=Camel-K", "kamel run --dev PlatformHttpServer.java -t route.tls-termination=edge -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key", "kamel run --dev PlatformHttpServer.java --resource 
secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=passthrough -t container.port=8443", "kamel run --dev PlatformHttpServer.java --resource secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=reencrypt -t route.tls-destination-ca-certificate-secret=my-combined-certs/tls.crt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443", "kamel run --dev PlatformHttpServer.java --resource secret:cert-from-openshift@/etc/ssl/cert-from-openshift -p quarkus.http.ssl.certificate.file=/etc/ssl/cert-from-openshift/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/cert-from-openshift/tls.key -t route.tls-termination=reencrypt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443", "annotate service platform-http-server service.beta.openshift.io/serving-cert-secret-name=cert-from-openshift", "kamel run PlatformHttpServer.java --dev -t route.tls-termination=edge -t route.tls-certificate=\"USD(cat tls.crt|awk 'NF {sub(/\\r/, \"\"); printf \"%s\\\\n\",USD0;}')\" -t route.tls-key=\"USD(cat tls.key|awk 'NF {sub(/\\r/, \"\"); printf \"%s\\\\n\",USD0;}')\"", "kamel run --trait service.[key]=[value] --trait service.[key2]=[value2] Integration.java", "kamel run --trait builder.[key]=[value] --trait builder.[key2]=[value2] Integration.java", "kamel run --trait container.[key]=[value] --trait container.[key2]=[value2] Integration.java", "kamel run --trait camel.[key]=[value] --trait camel.[key2]=[value2] Integration.java", "kamel run --trait dependencies.[key]=[value] Integration.java", "kamel run --trait deployer.[key]=[value] --trait deployer.[key2]=[value2] Integration.java", "kamel run --trait deployment.[key]=[value] Integration.java", "kamel run --trait environment.[key]=[value] --trait environment.[key2]=[value2] Integration.java", "kamel run --trait error-handler.[key]=[value] --trait error-handler.[key2]=[value2] Integration.java", "kamel run --trait jvm.[key]=[value] --trait jvm.[key2]=[value2] Integration.java", "kamel run -t jvm.classpath=/path/to/my-dependency.jar:/path/to/another-dependency.jar", "kamel run --trait kamelets.[key]=[value] --trait kamelets.[key2]=[value2] Integration.java", "kamel run --trait affinity.[key]=[value] --trait affinity.[key2]=[value2] Integration.java", "kamel run -t affinity.node-affinity-labels=\"kubernetes.io/hostname in(node-66-50.hosted.k8s.tld)\"", "kamel run -t affinity.pod-anti-affinity-labels=\"camel.apache.org/integration\"", "kamel run -t affinity.pod-affinity-labels=\"camel.apache.org/integration in(it1, it2)\"", "kamel run -t affinity.pod-anti-affinity-labels=\"camel.apache.org/integration\" -t affinity.pod-anti-affinity-labels=\"camel.apache.org/component=operator\"", "kamel run --trait openapi.[key]=[value] Integration.java", "kamel run --trait owner.[key]=[value] --trait owner.[key2]=[value2] Integration.java", "kamel run --trait platform.[key]=[value] --trait platform.[key2]=[value2] Integration.java", "kamel run --trait quarkus.[key]=[value] --trait quarkus.[key2]=[value2] integration.java", "kamel run -t quarkus.package-type=fast-jar -t 
quarkus.package-type=native" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/developing_and_managing_integrations_using_camel_k/camel-k-traits-reference
Chapter 6. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator
Chapter 6. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator You can install the multicluster engine Operator and deploy a hub cluster with the Agent-based OpenShift Container Platform Installer. The following procedure is partially automated and requires manual steps after the initial cluster is deployed. 6.1. Prerequisites You have read the following documentation: Cluster lifecycle with multicluster engine operator overview . Persistent storage using local volumes . Using GitOps ZTP to provision clusters at the network far edge . Preparing to install with the Agent-based Installer . About disconnected installation mirroring . You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). If you are installing in a disconnected environment, you must have a configured local mirror registry for disconnected installation mirroring. 6.2. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while disconnected You can mirror the required OpenShift Container Platform container images, the multicluster engine Operator, and the Local Storage Operator (LSO) into your local mirror registry in a disconnected environment. Ensure that you note the local DNS hostname and port of your mirror registry. Note To mirror your OpenShift Container Platform image repository to your mirror registry, you can use either the oc adm release image or oc mirror command. In this procedure, the oc mirror command is used as an example. Procedure Create an <assets_directory> folder to contain valid install-config.yaml and agent-config.yaml files. This directory is used to store all the assets. To mirror an OpenShift Container Platform image repository, the multicluster engine, and the LSO, create a ImageSetConfiguration.yaml file with the following settings: Example ImageSetConfiguration.yaml kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - "amd64" channels: - name: stable-4.17 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8 1 Specify the maximum size, in GiB, of each file within the image set. 2 Set the back-end location to receive the image set metadata. This location can be a registry or local directory. It is required to specify storageConfig values. 3 Set the registry URL for the storage backend. 4 Set the channel that contains the OpenShift Container Platform images for the version you are installing. 5 Set the Operator catalog that contains the OpenShift Container Platform images that you are installing. 6 Specify only certain Operator packages and channels to include in the image set. Remove this field to retrieve all packages in the catalog. 7 The multicluster engine packages and channels. 8 The LSO packages and channels. Note This file is required by the oc mirror command when mirroring content. 
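Before running the mirror command in the next step, you can optionally confirm the package names and available channels for the Operators listed in the image set configuration. The following commands are a sketch: they assume the oc-mirror plugin is installed locally, and the exact flag names can differ between oc-mirror versions, so check oc mirror list operators --help if they are rejected.

USD oc mirror list operators --catalog registry.redhat.io/redhat/redhat-operator-index:v4.17 --package multicluster-engine
USD oc mirror list operators --catalog registry.redhat.io/redhat/redhat-operator-index:v4.17 --package local-storage-operator

The reported channels can be added under the packages section of the image set configuration if you want to pin a specific channel instead of mirroring the default one.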
To mirror a specific OpenShift Container Platform image repository, the multicluster engine, and the LSO, run the following command: USD oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port> Update the registry and certificate in the install-config.yaml file: Example imageContentSources.yaml imageContentSources: - source: "quay.io/openshift-release-dev/ocp-release" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images" - source: "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release" - source: "registry.redhat.io/ubi9" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/ubi9" - source: "registry.redhat.io/multicluster-engine" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine" - source: "registry.redhat.io/rhel8" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/rhel8" - source: "registry.redhat.io/redhat" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/redhat" Additionally, ensure your certificate is present in the additionalTrustBundle field of the install-config.yaml . Example install-config.yaml additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE------- Important The oc mirror command creates a folder called oc-mirror-workspace with several outputs. This includes the imageContentSourcePolicy.yaml file that identifies all the mirrors you need for OpenShift Container Platform and your selected Operators. Generate the cluster manifests by running the following command: USD openshift-install agent create cluster-manifests This command updates the cluster manifests folder to include a mirror folder that contains your mirror configuration. 6.3. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while connected Create the required manifests for the multicluster engine Operator, the Local Storage Operator (LSO), and to deploy an agent-based OpenShift Container Platform cluster as a hub cluster. Procedure Create a sub-folder named openshift in the <assets_directory> folder. This sub-folder is used to store the extra manifests that will be applied during the installation to further customize the deployed cluster. The <assets_directory> folder contains all the assets including the install-config.yaml and agent-config.yaml files. Note The installer does not validate extra manifests. For the multicluster engine, create the following manifests and save them in the <assets_directory>/openshift folder: Example mce_namespace.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: multicluster-engine Example mce_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine Example mce_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: "stable-2.3" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace Note You can install a distributed unit (DU) at scale with the Red Hat Advanced Cluster Management (RHACM) using the assisted installer (AI). These distributed units must be enabled in the hub cluster. 
The AI service requires persistent volumes (PVs), which are manually created. For the AI service, create the following manifests and save them in the <assets_directory>/openshift folder: Example lso_namespace.yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: "true" name: openshift-local-storage Example lso_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage Example lso_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace Note After creating all the manifests, your filesystem must display as follows: Example Filesystem <assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml Create the agent ISO image by running the following command: USD openshift-install agent create image --dir <assets_directory> When the image is ready, boot the target machine and wait for the installation to complete. To monitor the installation, run the following command: USD openshift-install agent wait-for install-complete --dir <assets_directory> Note To configure a fully functional hub cluster, you must create the following manifests and manually apply them by running the command USD oc apply -f <manifest-name> . The order of the manifest creation is important, and where required, the waiting condition is displayed. For the PVs that are required by the AI service, create the following manifests: apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem Use the following command to wait for the availability of the PVs, before applying the subsequent manifests: USD oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m Note The devicePath is an example and may vary depending on the actual hardware configuration used. Create a manifest for a multicluster engine instance. Example MultiClusterEngine.yaml apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {} Create a manifest to enable the AI service. Example agentserviceconfig.yaml apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi Create a manifest to deploy subsequent spoke clusters. Example clusterimageset.yaml apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: "4.17" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64 Create a manifest to import the agent installed cluster (that hosts the multicluster engine and the Assisted Service) as the hub cluster.
Example autoimport.yaml apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: "true" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true Wait for the managed cluster to be created. USD oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m Verification To confirm that the managed cluster installation is successful, run the following command: USD oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m Additional resources The Local Storage Operator
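To further verify the hub cluster components that were installed through the extra manifests, you can check the Operator installations and their pods. The namespaces below match the manifests created earlier in this procedure; the ClusterServiceVersion names in the output depend on the Operator versions that were mirrored or installed.

USD oc get csv -n multicluster-engine
USD oc get csv -n openshift-local-storage
USD oc get pods -n multicluster-engine

Each ClusterServiceVersion should report the Succeeded phase, and the pods should be in the Running state.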
[ "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.17 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8", "oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port>", "imageContentSources: - source: \"quay.io/openshift-release-dev/ocp-release\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images\" - source: \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release\" - source: \"registry.redhat.io/ubi9\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/ubi9\" - source: \"registry.redhat.io/multicluster-engine\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine\" - source: \"registry.redhat.io/rhel8\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/rhel8\" - source: \"registry.redhat.io/redhat\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/redhat\"", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE-------", "openshift-install agent create cluster-manifests", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: multicluster-engine", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: \"stable-2.3\" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: \"true\" name: openshift-local-storage", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "<assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml", "openshift-install agent create image --dir <assets_directory>", "openshift-install agent wait-for install-complete --dir <assets_directory>", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem", "oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m", "The `devicePath` is an 
example and may vary depending on the actual hardware configuration used.", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: \"4.17\" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: \"true\" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true", "oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m", "oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-an-agent-based-installed-cluster-for-the-multicluster-engine-for-kubernetes
Upgrading Guide
Upgrading Guide Red Hat Single Sign-On 7.6 For Use with Red Hat Single Sign-On 7.6 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/upgrading_guide/index
Preface
Preface Red Hat OpenShift Data Foundation 4.9 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Z clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note Only internal OpenShift Data Foundation clusters are supported on IBM Z. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process for your environment: Internal Attached Devices mode Deploy using local storage devices
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_z_infrastructure/preface-ibm-z
2.14. Additional Resources
2.14. Additional Resources The ultimate documentation for cgroup commands is available on the manual pages provided with the libcgroup package. The section numbers are specified in the list of man pages below. The libcgroup Man Pages man 1 cgclassify - the cgclassify command is used to move running tasks to one or more cgroups. man 1 cgclear - the cgclear command is used to delete all cgroups in a hierarchy. man 5 cgconfig.conf - cgroups are defined in the cgconfig.conf file. man 8 cgconfigparser - the cgconfigparser command parses the cgconfig.conf file and mounts hierarchies. man 1 cgcreate - the cgcreate command creates new cgroups in hierarchies. man 1 cgdelete - the cgdelete command removes specified cgroups. man 1 cgexec - the cgexec command runs tasks in specified cgroups. man 1 cgget - the cgget command displays cgroup parameters. man 1 cgsnapshot - the cgsnapshot command generates a configuration file from existing subsystems. man 5 cgred.conf - cgred.conf is the configuration file for the cgred service. man 5 cgrules.conf - cgrules.conf contains the rules used for determining when tasks belong to certain cgroups. man 8 cgrulesengd - the cgrulesengd service distributes tasks to cgroups. man 1 cgset - the cgset command sets parameters for a cgroup. man 1 lscgroup - the lscgroup command lists the cgroups in a hierarchy. man 1 lssubsys - the lssubsys command lists the hierarchies containing the specified subsystems.
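As a quick illustration of how these commands fit together, the following sketch creates a cgroup in the cpu hierarchy, sets a parameter on it, runs a task inside it, lists the resulting cgroups, and removes the group again. The group name mygroup, the cpu.shares value, and the application path are arbitrary choices for this example, and the cpu subsystem is assumed to be mounted (for example, through the cgconfig service):

cgcreate -g cpu:/mygroup
cgset -r cpu.shares=512 mygroup
cgexec -g cpu:mygroup /usr/bin/myapp
lscgroup cpu:/
cgdelete cpu:/mygroup

Refer to the man pages listed above for the full syntax of each command.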
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-using_control_groups-additional_resources
probe::nfs.aop.writepages
probe::nfs.aop.writepages Name probe::nfs.aop.writepages - NFS client writing several dirty pages to the NFS server Synopsis nfs.aop.writepages Values for_reclaim a flag of writeback_control, indicates if it's invoked from the page allocator wpages write size (in pages) nr_to_write number of pages attempted to be written in this execution for_kupdate a flag of writeback_control, indicates if it's a kupdate writeback ino inode number size number of pages attempted to be written in this execution wsize write size dev device identifier Description The priority of wb is decided by the flags for_reclaim and for_kupdate .
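A minimal SystemTap script that uses this probe point might look like the following sketch. It assumes that SystemTap and the kernel debuginfo packages matching the running kernel are installed; the output format is only an example:

probe nfs.aop.writepages {
  printf("ino=%d wrote %d pages (wsize=%d, nr_to_write=%d)\n", ino, wpages, wsize, nr_to_write)
}

Save the script to a file such as nfs_writepages.stp (the file name is arbitrary) and run it with stap nfs_writepages.stp.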
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-aop-writepages
Chapter 17. Jakarta Security
Chapter 17. Jakarta Security 17.1. About Jakarta Security Jakarta Security defines plug-in interfaces for authentication and identity stores, and a new injectable-type SecurityContext interface that provides an access point for programmatic security. For details about the specifications, see Jakarta Security Specification . 17.2. Configure Jakarta Security Using Elytron Enabling Jakarta Security Using the elytron Subsystem The SecurityContext interface defined in Jakarta Security uses the Jakarta Authorization policy provider to access the current authenticated identity. To enable your deployments to use the SecurityContext interface, you must configure the elytron subsystem to manage the Jakarta Authorization configuration and define a default Jakarta Authorization policy provider. Disable Jakarta Authorization in the legacy security subsystem. Skip this step if Jakarta Authorization is already configured to be managed by Elytron. Define a Jakarta Authorization policy provider in the elytron subsystem and reload the server. Enabling Jakarta Security for Web Applications To enable Jakarta Security for a web application, the web application needs to be associated with either an Elytron http-authentication-factory or a security-domain . This installs the Elytron security handlers and activates the Elytron security framework for the deployment. The minimal steps to enable Jakarta Security are: Leave the default-security-domain attribute on the undertow subsystem undefined so that it defaults to other . Add an application-security-domain mapping from other to an Elytron security domain: When integrated-jaspi is set to false , ad-hoc identities are created dynamically. Jakarta Security is built on Jakarta Authentication. For information about configuring Jakarta Authentication, see Configure Jakarta Authentication Security Using Elytron .
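Once Jakarta Security is enabled for a deployment as described above, application code can inject the SecurityContext interface for programmatic checks. The following fragment is only a sketch: it uses the javax.security.enterprise package names that correspond to the Jakarta EE 8 APIs shipped with JBoss EAP 7.4, and the Admin role name is an example rather than a role defined by the server configuration.

import java.security.Principal;
import javax.inject.Inject;
import javax.security.enterprise.SecurityContext;

public class GreetingService {

    @Inject
    SecurityContext securityContext;

    public String greet() {
        // getCallerPrincipal() returns null when the caller is unauthenticated
        Principal caller = securityContext.getCallerPrincipal();
        String name = caller == null ? "anonymous" : caller.getName();
        // isCallerInRole() performs a programmatic role check
        boolean admin = securityContext.isCallerInRole("Admin");
        return "Hello " + name + (admin ? " (admin)" : "");
    }
}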
[ "/subsystem=security:write-attribute(name=initialize-jacc, value=false)", "/subsystem=elytron/policy=jacc:add(jacc-policy={}) reload", "/subsystem=undertow/application-security-domain=other:add(security-domain=ApplicationDomain, integrated-jaspi=false)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/jakarta_security
9.4. NFS Client Configuration Files
9.4. NFS Client Configuration Files NFS shares are mounted on the client side using the mount command. The format of the command is as follows: Replace <nfs-type> with either nfs for NFSv2 or NFSv3 servers, or nfs4 for NFSv4 servers. Replace <options> with a comma separated list of options for the NFS file system (refer to Section 9.4.3, "Common NFS Mount Options" for details). Replace <host> with the remote host, </remote/export> with the remote directory being mounted, and </local/directory> with the local directory where the remote file system is to be mounted. Refer to the mount man page for more details. If accessing an NFS share by manually issuing the mount command, the file system must be remounted manually after the system is rebooted. Red Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the /etc/fstab file or the autofs service. 9.4.1. /etc/fstab The /etc/fstab file is referenced by the netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the mount command during the boot process. A sample /etc/fstab line to mount an NFS export looks like the following example: Replace <server> with the hostname, IP address, or fully qualified domain name of the server exporting the file system. Replace </remote/export> with the path to the exported directory. Replace </local/directory> with the local file system on which the exported directory is mounted. This mount point must exist before /etc/fstab is read or the mount fails. Replace <nfs-type> with either nfs for NFSv2 or NFSv3 servers, or nfs4 for NFSv4 servers. Replace <options> with a comma separated list of options for the NFS file system (refer to Section 9.4.3, "Common NFS Mount Options" for details). Refer to the fstab man page for additional information.
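For example, with placeholder values substituted into the formats above, the mount command and the corresponding /etc/fstab entry might look like the following. The server name, export path, mount point, and options are illustrative only:

mount -t nfs -o rsize=32768,wsize=32768,hard,intr server1.example.com:/exports/data /misc/data

server1.example.com:/exports/data /misc/data nfs rsize=32768,wsize=32768,hard,intr 0 0

As noted above, the /misc/data mount point must exist before the /etc/fstab entry is read at boot time.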
[ "mount -t <nfs-type> -o <options> <host> : </remote/export> </local/directory>", "<server> : </remote/export> </local/directory> <nfs-type> <options> 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-nfs-client-config
Chapter 3. Understanding persistent storage
Chapter 3. Understanding persistent storage 3.1. Persistent storage overview Managing storage is a distinct problem from managing compute resources. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire OpenShift Container Platform cluster and claimed from any project. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project. PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. Important High availability of storage in the infrastructure is left to the underlying storage provider. PVCs are defined by a PersistentVolumeClaim API object, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. For example, they can be mounted once read-write or many times read-only. 3.2. Lifecycle of a volume and claim PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs have the following lifecycle. 3.2.1. Provision storage In response to requests from a developer defined in a PVC, a cluster administrator configures one or more dynamic provisioners that provision storage and a matching PV. Alternatively, a cluster administrator can create a number of PVs in advance that carry the details of the real storage that is available for use. PVs exist in the API and are available for use. 3.2.2. Bind claims When you create a PVC, you request a specific amount of storage, specify the required access mode, and create a storage class to describe and classify the storage. The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. If an appropriate PV does not exist, a provisioner for the storage class creates one. The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To minimize the excess, OpenShift Container Platform binds to the smallest PV that matches all other criteria. Claims remain unbound indefinitely if a matching volume does not exist or can not be created with any available provisioner servicing a storage class. Claims are bound as matching volumes become available. For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster. 
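To make the binding rules concrete, the following sketch shows a statically provisioned PV and a PVC that the control loop can bind to it, because the access modes match and the PV capacity is at least as large as the request. The names, sizes, NFS server address, and the manual storage class name are illustrative only.

Example static PV and matching PVC

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  nfs:
    path: /exports/data
    server: 172.17.0.2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 8Gi

After both objects exist, running the oc get pvc command shows the claim move from the Pending to the Bound status.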
3.2.3. Use pods and claimed PVs Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use the claim as a volume in a pod. Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it. You can schedule pods and access claimed PVs by including persistentVolumeClaim in the pod's volumes block. Note If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 3.2.4. Storage Object in Use Protection The Storage Object in Use Protection feature ensures that PVCs in active use by a pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss. Storage Object in Use Protection is enabled by default. Note A PVC is in active use by a pod when a Pod object exists that uses the PVC. If a user deletes a PVC that is in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. 3.2.5. Release a persistent volume When you are finished with a volume, you can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered released when the claim is deleted, but it is not yet available for another claim. The claimant's data remains on the volume and must be handled according to policy. 3.2.6. Reclaim policy for persistent volumes The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Retain reclaim policy allows manual reclamation of the resource for those volume plugins that support it. Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes once it is released from its claim. Important The Recycle reclaim policy is deprecated in OpenShift Container Platform 4. Dynamic provisioning is recommended for equivalent and better functionality. Delete reclaim policy deletes both the PersistentVolume object from OpenShift Container Platform and the associated storage asset in external infrastructure, such as Amazon Elastic Block Store (Amazon EBS) or VMware vSphere. Note Dynamically provisioned volumes are always deleted. 3.2.7. Reclaiming a persistent volume manually When a persistent volume claim (PVC) is deleted, the persistent volume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the data of the claimant remains on the volume. Procedure To manually reclaim the PV as a cluster administrator: Delete the PV. USD oc delete pv <pv-name> The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted. Clean up the data on the associated storage asset. Delete the associated storage asset. Alternately, to reuse the same storage asset, create a new PV with the storage asset definition. The reclaimed PV is now available for use by another PVC. 3.2.8. 
Changing the reclaim policy of a persistent volume To change the reclaim policy of a persistent volume: List the persistent volumes in your cluster: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s Choose one of your persistent volumes and change its reclaim policy: USD oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Verify that your chosen persistent volume has the right policy: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s In the preceding output, the volume bound to claim default/claim3 now has a Retain reclaim policy. The volume will not be automatically deleted when a user deletes claim default/claim3 . 3.3. Persistent volumes Each PV contains a spec and status , which is the specification and status of the volume, for example: PersistentVolume object definition example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 ... status: ... 1 Name of the persistent volume. 2 The amount of storage available to the volume. 3 The access mode, defining the read-write and mount permissions. 4 The reclaim policy, indicating how the resource should be handled once it is released. 3.3.1. Types of PVs OpenShift Container Platform supports the following persistent volume plugins: AliCloud Disk AWS Elastic Block Store (EBS) AWS Elastic File Store (EFS) Azure Disk Azure File Cinder Fibre Channel GCP Persistent Disk GCP Filestore IBM Power Virtual Server Block IBM(R) VPC Block HostPath iSCSI Local volume NFS OpenStack Manila Red Hat OpenShift Data Foundation VMware vSphere 3.3.2. Capacity Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the capacity attribute of the PV. Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on. 3.3.3. Access modes A persistent volume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim's access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO. Direct matches are always attempted first. The volume's modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. 
If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another. All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches. Important Volume access modes describe volume capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource. Errors in the provider show up at runtime as mount errors. For example, NFS offers ReadWriteOnce access mode. If you want to use the volume's ROX capability, mark the claims as ReadOnlyMany . iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, delete the pods that use the volumes. The following table lists the access modes: Table 3.1. Access modes Access Mode CLI abbreviation Description ReadWriteOnce RWO The volume can be mounted as read-write by a single node. ReadWriteOncePod [1] RWOP The volume can be mounted as read-write by a single pod on a single node. ReadOnlyMany ROX The volume can be mounted as read-only by many nodes. ReadWriteMany RWX The volume can be mounted as read-write by many nodes. ReadWriteOncePod access mode for persistent volumes is a Technology Preview feature. Important ReadWriteOncePod access mode for persistent volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 3.2. Supported access modes for persistent volumes Volume plugin ReadWriteOnce [1] ReadWriteOncePod [2] ReadOnlyMany ReadWriteMany AliCloud Disk ✅ ✅ - - AWS EBS [3] ✅ ✅ - - AWS EFS ✅ ✅ ✅ ✅ Azure File ✅ ✅ ✅ ✅ Azure Disk ✅ ✅ - - Cinder ✅ ✅ - - Fibre Channel ✅ ✅ ✅ ✅ [4] GCP Persistent Disk ✅ ✅ - - GCP Filestore ✅ ✅ ✅ ✅ HostPath ✅ ✅ - - IBM Power Virtual Server Disk ✅ ✅ ✅ ✅ IBM(R) VPC Disk ✅ ✅ - - iSCSI ✅ ✅ ✅ ✅ [4] Local volume ✅ ✅ - - LVM Storage ✅ ✅ - - NFS ✅ ✅ ✅ ✅ OpenStack Manila - ✅ - ✅ Red Hat OpenShift Data Foundation ✅ ✅ - ✅ VMware vSphere ✅ ✅ - ✅ [5] ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. ReadWriteOncePod is a Technology Preview feature. Use a recreate deployment strategy for pods that rely on AWS EBS. Only raw block volumes support the ReadWriteMany (RWX) access mode for Fibre Channel and iSCSI. For more information, see "Block volume support". 
If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If you do not have vSAN file service configured, and you request RWX, the volume fails to get created and an error is logged. For more information, see "Using Container Storage Interface" "VMware vSphere CSI Driver Operator". 3.3.4. Phase Volumes can be found in one of the following phases: Table 3.3. Volume phases Phase Description Available A free resource not yet bound to a claim. Bound The volume is bound to a claim. Released The claim was deleted, but the resource is not yet reclaimed by the cluster. Failed The volume has failed its automatic reclamation. You can view the name of the PVC that is bound to the PV by running the following command: USD oc get pv <pv-claim> 3.3.4.1. Mount options You can specify mount options while mounting a PV by using the attribute mountOptions . For example: Mount options example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default 1 Specified mount options are used while mounting the PV to the disk. The following PV types support mount options: AWS Elastic Block Store (EBS) Azure Disk Azure File Cinder GCE Persistent Disk iSCSI Local volume NFS Red Hat OpenShift Data Foundation (Ceph RBD only) VMware vSphere Note Fibre Channel and HostPath PVs do not support mount options. Additional resources ReadWriteMany vSphere volume support 3.4. Persistent volume claims Each PersistentVolumeClaim object contains a spec and status , which is the specification and status of the persistent volume claim (PVC), for example: PersistentVolumeClaim object definition example kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status: ... 1 Name of the PVC. 2 The access mode, defining the read-write and mount permissions. 3 The amount of storage available to the PVC. 4 Name of the StorageClass required by the claim. 3.4.1. Storage classes Claims can optionally request a specific storage class by specifying the storage class's name in the storageClassName attribute. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC. The cluster administrator can configure dynamic provisioners to service one or more storage classes. The cluster administrator can create a PV on demand that matches the specifications in the PVC. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The cluster administrator can also set a default storage class for all PVCs. When a default storage class is configured, the PVC must explicitly ask for StorageClass or storageClassName annotations set to "" to be bound to a PV without a storage class. Note If more than one storage class is marked as default, a PVC can only be created if the storageClassName is explicitly specified. Therefore, only one storage class should be set as the default. 3.4.2. 
Access modes Claims use the same conventions as volumes when requesting storage with specific access modes. 3.4.3. Resources Claims, such as pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims. 3.4.4. Claims as volumes Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is mounted to the host and into the pod, for example: Mount volume to the host and into the pod example kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3 1 Path to mount the volume inside the pod. 2 Name of the volume to mount. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 Name of the PVC, that exists in the same namespace, to use. 3.5. Block volume support OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification. Important Pods using raw block volumes must be configured to allow privileged containers. The following table displays which volume plugins support block volumes. Table 3.4. Block volume support Volume Plugin Manually provisioned Dynamically provisioned Fully supported Amazon Elastic Block Store (Amazon EBS) ✅ ✅ ✅ Amazon Elastic File Storage (Amazon EFS) AliCloud Disk ✅ ✅ ✅ Azure Disk ✅ ✅ ✅ Azure File Cinder ✅ ✅ ✅ Fibre Channel ✅ ✅ GCP ✅ ✅ ✅ HostPath IBM VPC Disk ✅ ✅ ✅ iSCSI ✅ ✅ Local volume ✅ ✅ LVM Storage ✅ ✅ ✅ NFS Red Hat OpenShift Data Foundation ✅ ✅ ✅ VMware vSphere ✅ ✅ ✅ Important Using any of the block volumes that can be provisioned manually, but are not provided as fully supported, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.5.1. Block volume examples PV example apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: ["50060e801049cfd1"] lun: 0 readOnly: false 1 volumeMode must be set to Block to indicate that this PV is a raw block volume. PVC example apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi 1 volumeMode must be set to Block to indicate that a raw block PVC is requested. 
Pod specification example apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: ["/bin/sh", "-c"] args: [ "tail -f /dev/null" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3 1 volumeDevices , instead of volumeMounts , is used for block devices. Only PersistentVolumeClaim sources can be used with raw block volumes. 2 devicePath , instead of mountPath , represents the path to the physical device where the raw block is mapped to the system. 3 The volume source must be of type persistentVolumeClaim and must match the name of the PVC as expected. Table 3.5. Accepted values for volumeMode Value Default Filesystem Yes Block No Table 3.6. Binding scenarios for block volumes PV volumeMode PVC volumeMode Binding result Filesystem Filesystem Bind Unspecified Unspecified Bind Filesystem Unspecified Bind Unspecified Filesystem Bind Block Block Bind Unspecified Block No Bind Block Unspecified No Bind Filesystem Block No Bind Block Filesystem No Bind Important Unspecified values result in the default value of Filesystem . 3.6. Using fsGroup to reduce pod timeouts If a storage volume contains many files (~1,000,000 or greater), you may experience pod timeouts. This can occur because, by default, OpenShift Container Platform recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a pod's securityContext when that volume is mounted. For large volumes, checking and changing ownership and permissions can be time consuming, slowing pod startup. You can use the fsGroupChangePolicy field inside a securityContext to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a pod. This field only applies to volume types that support fsGroup -controlled ownership and permissions. This field has two possible values: OnRootMismatch : Only change permissions and ownership if the permissions and ownership of the root directory do not match the expected permissions of the volume. This can help shorten the time it takes to change ownership and permission of a volume to reduce pod timeouts. Always : Always change permission and ownership of the volume when a volume is mounted. fsGroupChangePolicy example securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: "OnRootMismatch" 1 ... 1 OnRootMismatch specifies skipping recursive permission change, thus helping to avoid pod timeout problems. Note The fsGroupChangePolicy field has no effect on ephemeral volume types, such as secret, configMap, and emptyDir.
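The securityContext fragment shown above is set at the pod level. The following sketch shows where it sits in a complete Pod manifest; the pod name, image, claim name, and mount path are placeholders for this example.

Example Pod with fsGroupChangePolicy

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: registry.example.com/myapp:latest
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: large-volume-claim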
[ "oc delete pv <pv-name>", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:", "oc get pv <pv-claim>", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3", "apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi", "apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3", "securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage/understanding-persistent-storage
Chapter 4. Installing a cluster on Nutanix in a restricted network
Chapter 4. Installing a cluster on Nutanix in a restricted network In OpenShift Container Platform 4.17, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 4.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 4.2. About installations in restricted networks In OpenShift Container Platform 4.17, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3.
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates.
Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 4.5. Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Update one or more of the default configuration parameters in the install-config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control plane nodes are schedulable. For more information, see "Installing a three-node cluster on Nutanix". A minimal sketch of this compute setting follows this note.
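The following is a minimal, hypothetical sketch of the compute stanza for a three-node cluster; it is not a complete install-config.yaml, and the remaining fields stay as described in this procedure.
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0    # no dedicated compute machines; the three control plane machines also schedule workloads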
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 4.6.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 4.6.2. Configuring failure domains Failure domains improve the fault tolerance of an OpenShift Container Platform cluster by distributing control plane and compute machines across multiple Nutanix Prism Elements (clusters). Tip It is recommended that you configure three failure domains to ensure high-availability. Prerequisites You have an installation configuration file ( install-config.yaml ). Procedure Edit the install-config.yaml file and add the following stanza to configure the first failure domain: apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid> # ... 
where: <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <prism_element_name> Optional. Specifies the name of the Prism Element. <prism_element_uuid > Specifies the UUID of the Prism Element. <network_uuid > Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. As required, configure additional failure domains. To distribute control plane and compute machines across the failure domains, do one of the following: If compute and control plane machines can share the same set of failure domains, add the failure domain names under the cluster's default machine configuration. Example of control plane and compute machines sharing a set of failure domains apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... If compute and control plane machines must use different failure domains, add the failure domain names under the respective machine pools. Example of control plane and compute machines using different failure domains apiVersion: v1 baseDomain: example.com compute: # ... controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 # ... Save the file. 4.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.10. Post installation Complete the following steps to complete the configuration of your cluster. 4.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.10.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. 
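For reference, an ImageContentSourcePolicy generated by the oc-mirror plugin resembles the following minimal sketch; the mirror host name and repository path are hypothetical placeholders, and the file in your oc-mirror results directory is the authoritative version to apply.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: release-0                                          # hypothetical name; oc-mirror sets the real one
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.example.com:5000/openshift/release-images     # hypothetical mirror registry location
    source: quay.io/openshift-release-dev/ocp-release      # online source that image pulls are redirected from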
Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 4.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 4.12. Additional resources About remote health monitoring 4.13. Next steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install coreos print-stream-json", "\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"", "platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "./openshift-install create install-config --dir <installation_directory> 1", "platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3", "apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml 
openshift-machine-api-nutanix-credentials-credentials.yaml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned
3.2. Guest Security Recommended Practices
3.2. Guest Security Recommended Practices All of the recommended practices for securing a Red Hat Enterprise Linux system documented in the Red Hat Enterprise Linux Security Guide apply to conventional, non-virtualized systems as well as systems installed as a virtualized guest. However, there are a few security practices which are of critical importance when running guests in a virtualized environment: With all management of the guest likely taking place remotely, ensure that the management of the system takes place only over secured network channels. Tools such as SSH and network protocols such as TLS or SSL provide both authentication and data encryption to ensure that only approved administrators can manage the system remotely. Some virtualization technologies use special guest agents or drivers to enable some virtualization specific features. Ensure that these agents and applications are secured using the standard Red Hat Enterprise Linux security features, such as SELinux. In virtualized environments there is a greater risk of sensitive data being accessed outside the protection boundaries of the guest system. Protect stored sensitive data using encryption tools such as dm-crypt and GnuPG ; although special care needs to be taken to ensure the confidentiality of the encryption keys.
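As a concrete illustration of the last recommendation, the following is a minimal sketch of protecting guest data with dm-crypt (LUKS) and GnuPG; the /dev/vdb device, mount point, and file name are hypothetical, and safeguarding the passphrases and keys remains your responsibility.
# cryptsetup luksFormat /dev/vdb                       # initialize LUKS encryption on a spare data disk in the guest
# cryptsetup luksOpen /dev/vdb secure_data             # map the decrypted device to /dev/mapper/secure_data
# mkfs.ext4 /dev/mapper/secure_data                    # create a file system on the mapped device
# mount /dev/mapper/secure_data /srv/secure            # mount it for the guest application to use
# gpg --symmetric --cipher-algo AES256 payroll.csv     # encrypt an individual file; writes payroll.csv.gpg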
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/sect-virtualization_security_guide-guest_security-guest_security_recommended_practices
Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OpenShift Container Platform 4.2 to 4.5 source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.11 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . Install the Migration Toolkit for Containers Operator on the source cluster: OpenShift Container Platform 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager. OpenShift Container Platform 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface. Configure object storage to use as a replication repository. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . To uninstall MTC, see Uninstalling MTC and deleting resources . 4.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.8. MTC 1.8 only supports migrations from OpenShift Container Platform 4.9 and later. Table 4.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.8 OpenShift Container Platform 4.9 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. 
For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 4.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 4.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.11. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. 
Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 4.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.11, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 4.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 4.4.1.1. 
TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 4.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 4.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 4.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 4.4.2.1. NetworkPolicy configuration 4.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. 
The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 4.4.2.1.2. Ingress traffic to Rsync pods The following policy allows all ingress traffic to Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-ingress-to-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 4.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be set up between the two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Allow 4.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 4.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods can access it: Table 4.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 4.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with .
to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 4.5. Running Rsync as either root or non-root Important This section applies only when you are working with the OpenShift API, not the web console. OpenShift environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline , or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as a non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 4.5.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as a non-root user works in most cases, data transfer might fail when you run workloads as the root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges prior to migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 4.5.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 4.5.3. Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...]
runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 4.6. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 4.6.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 4.6.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Data Foundation. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. 4.6.3. Additional resources Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 4.7. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
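After the delete commands finish, you can optionally confirm that no MTC or Velero cluster-scoped resources remain. The following commands are a minimal verification sketch, not part of the official procedure; they assume the same naming patterns that the delete commands above rely on, and they should return no output on a fully cleaned cluster:
# Check for leftover CRDs, cluster roles, and cluster role bindings.
# Empty output means the corresponding resources were removed.
oc get crds -o name | grep -E 'migration.openshift.io|velero'
oc get clusterroles -o name | grep -E 'migration|velero'
oc get clusterrolebindings -o name | grep -E 'migration|velero'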
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migration_toolkit_for_containers/installing-mtc-restricted
Appendix B. Custom Network Properties
Appendix B. Custom Network Properties B.1. Explanation of bridge_opts Parameters Table B.1. bridge_opts parameters Parameter Description forward_delay Sets the time, in deciseconds, a bridge will spend in the listening and learning states. If no switching loop is discovered in this time, the bridge will enter forwarding state. This allows time to inspect the traffic and layout of the network before normal network operation. group_addr To send a general query, set this value to zero. To send group-specific and group-and-source-specific queries, set this value to a 6-byte MAC address, not an IP address. Allowed values are 01:80:C2:00:00:0x except 01:80:C2:00:00:01 , 01:80:C2:00:00:02 and 01:80:C2:00:00:03 . group_fwd_mask Enables the bridge to forward link-local group addresses. Changing this value from the default will allow non-standard bridging behavior. hash_max The maximum number of buckets in the hash table. This takes effect immediately and cannot be set to a value less than the current number of multicast group entries. Value must be a power of two. hello_time Sets the time interval, in deciseconds, between sending 'hello' messages, announcing bridge position in the network topology. Applies only if this bridge is the Spanning Tree root bridge. max_age Sets the maximum time, in deciseconds, to receive a 'hello' message from another root bridge before that bridge is considered dead and takeover begins. multicast_last_member_count Sets the number of 'last member' queries sent to the multicast group after receiving a 'leave group' message from a host. multicast_last_member_interval Sets the time, in deciseconds, between 'last member' queries. multicast_membership_interval Sets the time, in deciseconds, that a bridge will wait to hear from a member of a multicast group before it stops sending multicast traffic to the host. multicast_querier Sets whether the bridge actively runs a multicast querier or not. When a bridge receives a 'multicast host membership' query from another network host, that host is tracked based on the time that the query was received plus the multicast query interval time. If the bridge later attempts to forward traffic for that multicast membership, or is communicating with a querying multicast router, this timer confirms the validity of the querier. If valid, the multicast traffic is delivered via the bridge's existing multicast membership table; if no longer valid, the traffic is sent via all bridge ports. Broadcast domains with, or expecting, multicast memberships should run at least one multicast querier for improved performance. multicast_querier_interval Sets the maximum time, in deciseconds, between last 'multicast host membership' query received from a host to ensure it is still valid. multicast_query_use_ifaddr Boolean. Defaults to '0', in which case the querier uses 0.0.0.0 as source address for IPv4 messages. Changing this sets the bridge IP as the source address. multicast_query_interval Sets the time, in deciseconds, between query messages sent by the bridge to ensure validity of multicast memberships. At this time, or if the bridge is asked to send a multicast query for that membership, the bridge checks its own multicast querier state based on the time that a check was requested plus multicast_query_interval. If a multicast query for this membership has been sent within the last multicast_query_interval, it is not sent again.
multicast_query_response_interval Length of time, in deciseconds, a host is allowed to respond to a query once it has been sent. Must be less than or equal to the value of the multicast_query_interval. multicast_router Allows you to enable or disable ports as having multicast routers attached. A port with one or more multicast routers will receive all multicast traffic. A value of 0 disables completely, a value of 1 enables the system to automatically detect the presence of routers based on queries, and a value of 2 enables ports to always receive all multicast traffic. multicast_snooping Toggles whether snooping is enabled or disabled. Snooping allows the bridge to listen to the network traffic between routers and hosts to maintain a map to filter multicast traffic to the appropriate links. This option allows the user to re-enable snooping if it was automatically disabled due to hash collisions; however, snooping will not be re-enabled if the hash collision has not been resolved. multicast_startup_query_count Sets the number of queries sent out at startup to determine membership information. multicast_startup_query_interval Sets the time, in deciseconds, between queries sent out at startup to determine membership information.
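When you set the bridge_opts custom network property for a logical network, the parameters in this table are typically supplied together as whitespace-separated key=value pairs. The line below is an illustrative sketch only; the parameter names come from the table above, but the specific values are arbitrary examples rather than recommended settings:
forward_delay=1500 multicast_snooping=1 multicast_querier=1 multicast_startup_query_count=2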
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/appe-Custom_Network_Properties
Appendix C. S3 common response status codes
Appendix C. S3 common response status codes The following table lists the valid common HTTP response status and its corresponding code. Table C.1. Response Status HTTP Status Response Code 100 Continue 200 Success 201 Created 202 Accepted 204 NoContent 206 Partial content 304 NotModified 400 InvalidArgument 400 InvalidDigest 400 BadDigest 400 InvalidBucketName 400 InvalidObjectName 400 UnresolvableGrantByEmailAddress 400 InvalidPart 400 InvalidPartOrder 400 RequestTimeout 400 EntityTooLarge 403 AccessDenied 403 UserSuspended 403 RequestTimeTooSkewed 404 NoSuchKey 404 NoSuchBucket 404 NoSuchUpload 405 MethodNotAllowed 408 RequestTimeout 409 BucketAlreadyExists 409 BucketNotEmpty 411 MissingContentLength 412 PreconditionFailed 416 InvalidRange 422 UnprocessableEntity 500 InternalError
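When you troubleshoot S3 requests against the Ceph Object Gateway, the HTTP status from this table appears in the response status line and the corresponding response code appears in the XML error body. The command below is an illustrative sketch only; the gateway endpoint, bucket, and object names are placeholder values:
# Print only the HTTP status code for a GET request.
# A request for a missing key in an accessible bucket typically returns 404 (NoSuchKey in the body);
# an unauthorized request typically returns 403 (AccessDenied).
curl -s -o /dev/null -w '%{http_code}\n' http://rgw.example.com:8080/mybucket/missing-object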
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/developer_guide/s3-common-response-status-codes_dev
Chapter 5. ValidatingAdmissionPolicy [admissionregistration.k8s.io/v1]
Chapter 5. ValidatingAdmissionPolicy [admissionregistration.k8s.io/v1] Description ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . spec object ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy. status object ValidatingAdmissionPolicyStatus represents the status of an admission validation policy. 5.1.1. .spec Description ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy. Type object Property Type Description auditAnnotations array auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; a least one of validations or auditAnnotations is required. auditAnnotations[] object AuditAnnotation describes how to produce an audit annotation for an API request. failurePolicy string failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings. A policy is invalid if spec.paramKind refers to a non-existent Kind. A binding is invalid if spec.paramRef.name refers to a non-existent resource. failurePolicy does not define how validations that evaluate to false are handled. When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced. Allowed values are Ignore or Fail. Defaults to Fail. Possible enum values: - "Fail" means that an error calling the webhook causes the admission to fail. - "Ignore" means that an error calling the webhook is ignored. matchConditions array MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. If a parameter object is provided, it can be accessed via the params handle in the same manner as validation expressions. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the policy is skipped. 2. If ALL matchConditions evaluate to TRUE, the policy is evaluated. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the policy is skipped matchConditions[] object MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. 
matchConstraints object MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) paramKind object ParamKind is a tuple of Group Kind and Version. validations array Validations contain CEL expressions which is used to apply the validation. Validations and AuditAnnotations may not both be empty; a minimum of one Validations or AuditAnnotations is required. validations[] object Validation specifies the CEL expression which is used to apply the validation. variables array Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under variables in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy. The expression of a variable can refer to other variables defined earlier in the list but not those after. Thus, Variables must be sorted by the order of first appearance and acyclic. variables[] object Variable is the definition of a variable that is used for composition. A variable is defined as a named expression. 5.1.2. .spec.auditAnnotations Description auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; a least one of validations or auditAnnotations is required. Type array 5.1.3. .spec.auditAnnotations[] Description AuditAnnotation describes how to produce an audit annotation for an API request. Type object Required key valueExpression Property Type Description key string key specifies the audit annotation key. The audit annotation keys of a ValidatingAdmissionPolicy must be unique. The key must be a qualified name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length. The key is combined with the resource name of the ValidatingAdmissionPolicy to construct an audit annotation key: "{ValidatingAdmissionPolicy name}/{key}". If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy and the same audit annotation key, the annotation key will be identical. In this case, the first annotation written with the key will be included in the audit event and all subsequent annotations with the same key will be discarded. Required. valueExpression string valueExpression represents the expression which is evaluated by CEL to produce an audit annotation value. The expression must evaluate to either a string or null value. If the expression evaluates to a string, the audit annotation is included with the string value. If the expression evaluates to null or empty string the audit annotation will be omitted. The valueExpression may be no longer than 5kb in length. If the result of the valueExpression is more than 10kb in length, it will be truncated to 10kb. If multiple ValidatingAdmissionPolicyBinding resources match an API request, then the valueExpression will be evaluated for each binding. All unique values produced by the valueExpressions will be joined together in a comma-separated list. Required. 5.1.4. .spec.matchConditions Description MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. 
An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. If a parameter object is provided, it can be accessed via the params handle in the same manner as validation expressions. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the policy is skipped. 2. If ALL matchConditions evaluate to TRUE, the policy is evaluated. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the policy is skipped Type array 5.1.5. .spec.matchConditions[] Description MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. Type object Required name expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables: 'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(/pkg/apis/admission/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/ Required. name string Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', ' ' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9 .]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName') Required. 5.1.6. .spec.matchConstraints Description MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) Type object Property Type Description excludeResourceRules array ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) excludeResourceRules[] object NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. matchPolicy string matchPolicy defines how the "MatchResources" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy. - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy. Defaults to "Equivalent" Possible enum values: - "Equivalent" means requests should be sent to the webhook if they modify a resource listed in rules via another API group or version. - "Exact" means requests should only be sent to the webhook if they exactly match a given rule. namespaceSelector LabelSelector NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the policy on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. resourceRules array ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches any Rule. resourceRules[] object NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. 5.1.7. .spec.matchConstraints.excludeResourceRules Description ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) Type array 5.1.8. .spec.matchConstraints.excludeResourceRules[] Description NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. ' ' is all groups. If ' ' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. ' ' is all versions. If ' ' is present, the length of the slice must be one. Required. 
operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. ' ' means all resources, but not subresources. 'pods/ ' means all subresources of pods. ' /scale' means all scale subresources. ' /*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and " " "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. " " means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 5.1.9. .spec.matchConstraints.resourceRules Description ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches any Rule. Type array 5.1.10. .spec.matchConstraints.resourceRules[] Description NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. ' ' is all groups. If ' ' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. ' ' is all versions. If ' ' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. ' ' means all resources, but not subresources. 'pods/ ' means all subresources of pods. ' /scale' means all scale subresources. ' /*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and " " "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. " " means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 5.1.11. .spec.paramKind Description ParamKind is a tuple of Group Kind and Version. 
Type object Property Type Description apiVersion string APIVersion is the API group version the resources belong to. In format of "group/version". Required. kind string Kind is the API kind the resources belong to. Required. 5.1.12. .spec.validations Description Validations contain CEL expressions which is used to apply the validation. Validations and AuditAnnotations may not both be empty; a minimum of one Validations or AuditAnnotations is required. Type array 5.1.13. .spec.validations[] Description Validation specifies the CEL expression which is used to apply the validation. Type object Required expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables: - 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. - 'variables' - Map of composited variables, from its name to its lazily evaluated value. For example, a variable named 'foo' can be accessed as 'variables.foo'. - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. The apiVersion , kind , metadata.name and metadata.generateName are always accessible from the root of the object. No other metadata properties are accessible. Only property names of the form [a-zA-Z_.-/][a-zA-Z0-9_.-/]* are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - ' ' escapes to ' underscores ' - '.' escapes to ' dot ' - '-' escapes to ' dash ' - '/' escapes to ' slash ' - Property names that exactly match a CEL RESERVED keyword escape to ' {keyword} '. The keywords are: "true", "false", "null", "in", "as", "break", "const", "continue", "else", "for", "function", "if", "import", "let", "loop", "package", "namespace", "return". Examples: - Expression accessing a property named "namespace": {"Expression": "object. namespace > 0"} - Expression accessing a property named "x-prop": {"Expression": "object.x dash prop > 0"} - Expression accessing a property named "redact d": {"Expression": "object.redact underscores d > 0"} Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type: - 'set': X + Y performs a union where the array positions of all elements in X are preserved and non-intersecting elements in Y are appended, retaining their partial order. - 'map': X + Y performs a merge where the array positions of all keys in X are preserved but the values are overwritten by values in Y when the key sets of X and Y intersect. 
Elements in Y with non-intersecting keys are appended, retaining their partial order. Required. message string Message represents the message displayed when validation fails. The message is required if the Expression contains line breaks. The message must not contain line breaks. If unset, the message is "failed rule: {Rule}". e.g. "must be a URL with the host matching spec.host" If the Expression contains line breaks. Message is required. The message must not contain line breaks. If unset, the message is "failed Expression: {Expression}". messageExpression string messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. messageExpression has access to all the same variables as the expression except for 'authorizer' and 'authorizer.requestResource'. Example: "object.x must be less than max ("string(params.max)")" reason string Reason represents a machine-readable description of why this validation failed. If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: "Unauthorized", "Forbidden", "Invalid", "RequestEntityTooLarge". If not set, StatusReasonInvalid is used in the response to the client. 5.1.14. .spec.variables Description Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under variables in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy. The expression of a variable can refer to other variables defined earlier in the list but not those after. Thus, Variables must be sorted by the order of first appearance and acyclic. Type array 5.1.15. .spec.variables[] Description Variable is the definition of a variable that is used for composition. A variable is defined as a named expression. Type object Required name expression Property Type Description expression string Expression is the expression that will be evaluated as the value of the variable. The CEL expression has access to the same identifiers as the CEL expressions in Validation. name string Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. The variable can be accessed in other expressions through variables For example, if name is "foo", the variable will be available as variables.foo 5.1.16. .status Description ValidatingAdmissionPolicyStatus represents the status of an admission validation policy. 
Type object Property Type Description conditions array (Condition) The conditions represent the latest available observations of a policy's current state. observedGeneration integer The generation observed by the controller. typeChecking object TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy 5.1.17. .status.typeChecking Description TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy Type object Property Type Description expressionWarnings array The type checking warnings for each expression. expressionWarnings[] object ExpressionWarning is a warning information that targets a specific expression. 5.1.18. .status.typeChecking.expressionWarnings Description The type checking warnings for each expression. Type array 5.1.19. .status.typeChecking.expressionWarnings[] Description ExpressionWarning is a warning information that targets a specific expression. Type object Required fieldRef warning Property Type Description fieldRef string The path to the field that refers the expression. For example, the reference to the expression of the first item of validations is "spec.validations[0].expression" warning string The content of type checking information in a human-readable form. Each line of the warning contains the type that the expression is checked against, followed by the type check error from the compiler. 5.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies DELETE : delete collection of ValidatingAdmissionPolicy GET : list or watch objects of kind ValidatingAdmissionPolicy POST : create a ValidatingAdmissionPolicy /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies GET : watch individual changes to a list of ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name} DELETE : delete a ValidatingAdmissionPolicy GET : read the specified ValidatingAdmissionPolicy PATCH : partially update the specified ValidatingAdmissionPolicy PUT : replace the specified ValidatingAdmissionPolicy /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies/{name} GET : watch changes to an object of kind ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name}/status GET : read status of the specified ValidatingAdmissionPolicy PATCH : partially update status of the specified ValidatingAdmissionPolicy PUT : replace status of the specified ValidatingAdmissionPolicy 5.2.1. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies HTTP method DELETE Description delete collection of ValidatingAdmissionPolicy Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ValidatingAdmissionPolicy Table 5.3. 
HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create a ValidatingAdmissionPolicy Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body ValidatingAdmissionPolicy schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 202 - Accepted ValidatingAdmissionPolicy schema 401 - Unauthorized Empty 5.2.2. /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies HTTP method GET Description watch individual changes to a list of ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the ValidatingAdmissionPolicy HTTP method DELETE Description delete a ValidatingAdmissionPolicy Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ValidatingAdmissionPolicy Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ValidatingAdmissionPolicy Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ValidatingAdmissionPolicy Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body ValidatingAdmissionPolicy schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty 5.2.4. /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the ValidatingAdmissionPolicy HTTP method GET Description watch changes to an object of kind ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name}/status Table 5.19. 
Global path parameters Parameter Type Description name string name of the ValidatingAdmissionPolicy HTTP method GET Description read status of the specified ValidatingAdmissionPolicy Table 5.20. HTTP responses HTTP code Response body 200 - OK ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ValidatingAdmissionPolicy Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. HTTP responses HTTP code Response body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ValidatingAdmissionPolicy Table 5.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.24. Body parameters Parameter Type Description body ValidatingAdmissionPolicy schema Table 5.25. HTTP responses HTTP code Response body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty
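The operations above can be exercised with a minimal manifest. The following sketch is not taken from this reference: the policy name, match rule, and CEL expression are hypothetical, and the oc flags shown are only approximate client-side equivalents of the dryRun and fieldValidation query parameters described in the tables above.
# validatingadmissionpolicy-example.yaml (hypothetical name and rule)
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
    message: "replicas must be 5 or fewer"

# A ValidatingAdmissionPolicyBinding is also required before the policy is enforced.
# Server-side dry run of the create operation (roughly dryRun=All):
oc create -f validatingadmissionpolicy-example.yaml --dry-run=server
# Strict handling of unknown or duplicate fields (roughly fieldValidation=Strict):
oc apply -f validatingadmissionpolicy-example.yaml --validate=strict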
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/extension_apis/validatingadmissionpolicy-admissionregistration-k8s-io-v1
3.2.2. Consumers of Processing Power
3.2.2. Consumers of Processing Power There are two main consumers of processing power: Applications The operating system itself 3.2.2.1. Applications The most obvious consumers of processing power are the applications and programs you want the computer to run for you. From a spreadsheet to a database, applications are the reason you have a computer. A single-CPU system can only do one thing at any given time. Therefore, if your application is running, everything else on the system is not. And the opposite is, of course, true -- if something other than your application is running, then your application is doing nothing. But how is it that many different applications can seemingly run at once under a modern operating system? The answer is that these are multitasking operating systems. In other words, they create the illusion that many different things are going on simultaneously when in fact that is not possible. The trick is to give each process a fraction of a second's worth of time running on the CPU before giving the CPU to another process for the fraction of a second. If these context switches happen frequently enough, the illusion of multiple applications running simultaneously is achieved. Of course, applications do other things than manipulate data using the CPU. They may wait for user input as well as performing I/O to devices such as disk drives and graphics displays. When these events take place, the application no longer needs the CPU. At these times, the CPU can be used for other processes running other applications without slowing the waiting application at all. In addition, the CPU can be used by another consumer of processing power: the operating system itself.
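To make the context-switching behavior described above concrete, the following sketch shows one way to observe it on a running Linux system; the exact column layout of vmstat output varies between versions.
# Report system activity once per second, five times.
# The "cs" column counts context switches per interval; the "r" column shows
# how many runnable processes are competing for the CPU.
vmstat 1 5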
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-bandwidth-processing-consumers
Chapter 5. Fixed issues
Chapter 5. Fixed issues The following sections list the issues fixed in AMQ Streams 1.6.x. Red Hat recommends that you upgrade to the latest patch release if you are using AMQ Streams 1.6.x with OpenShift Container Platform 3.11. For details of the issues fixed in: Kafka 2.6.3, refer to the Kafka 2.6.3 Release Notes Kafka 2.6.2, refer to the Kafka 2.6.2 Release Notes Kafka 2.6.1, refer to the Kafka 2.6.1 Release Notes Kafka 2.6.0, refer to the Kafka 2.6.0 Release Notes 5.1. Fixed issues for AMQ Streams 1.6.7 The AMQ Streams 1.6.7 patch release (Long Term Support) is now available. AMQ Streams 1.6.7 is the latest Long Term Support release for use with OpenShift Container Platform 3.11 only, and is supported only for as long as OpenShift Container Platform 3.11 is supported. Note that AMQ Streams 1.6.7 is supported on OCP 3.11 only. The AMQ Streams product images have been upgraded to version 1.6.7. For additional details about the issues resolved in AMQ Streams 1.6.7, see AMQ Streams 1.6.x Resolved Issues . Log4j vulnerabilities AMQ Streams includes log4j 1.2.17. The release fixes a number of log4j vulnerabilities. For more information on the vulnerabilities addressed in this release, see the following CVE articles: CVE-2022-23307 CVE-2022-23305 CVE-2022-23302 CVE-2021-4104 CVE-2020-9488 CVE-2019-17571 CVE-2017-5645 5.2. Fixed issues for AMQ Streams 1.6.6 For additional details about the issues resolved in AMQ Streams 1.6.6, see AMQ Streams 1.6.x Resolved Issues . Log4j2 vulnerabilities AMQ Streams includes log4j2 2.17.1. The release fixes a number of log4j2 vulnerabilities. For more information on the vulnerabilities addressed in this release, see the following CVE articles: CVE-2021-45046 CVE-2021-45105 CVE-2021-44832 CVE-2021-44228 5.3. Fixed issues for AMQ Streams 1.6.5 For additional details about the issues resolved in AMQ Streams 1.6.5, see AMQ Streams 1.6.x Resolved Issues . Log4j2 vulnerability The 1.6.5 release fixes a remote code execution vulnerability for AMQ Streams components that use log4j2. The vulnerability could allow remote code execution on the server if the system logs a string value from an unauthorized source. This affects log4j versions between 2.0 and 2.14.1. For more information, see CVE-2021-44228 . 5.4. Fixed issues for AMQ Streams 1.6.4 For additional details about the issues resolved in AMQ Streams 1.6.4, see AMQ Streams 1.6.x Resolved Issues . 5.5. Fixed issues for AMQ Streams 1.6.2 The AMQ Streams 1.6.2 patch release is now available. The release includes a number of fixes related to Kafka Connect. The AMQ Streams product images have not changed and remain at version 1.6. For additional details about the issues resolved in AMQ Streams 1.6.2, see AMQ Streams 1.6.2 Resolved Issues . Note Following a CVE update, the version of AMQ Streams managed by the Operator Lifecycle Manager (OLM) was changed to 1.6.1. To avoid confusion, the patch release for AMQ Streams 1.6 was given a version number of 1.6.2. 5.6.
Fixed issues for AMQ Streams 1.6.0 Issue Number Description ENTMQST-2049 Kafka Bridge: Kafka consumer should be tracked with group-consumerid key ENTMQST-2289 Allow downgrade with message version older than the downgrade version ENTMQST-2292 Diff PodDisruptionBudgets before patching them to not recreate them on every reconciliation ENTMQST-2146 MirrorMaker 2 on OCP doesn't properly mirror messages with headers ENTMQST-2147 MirrorMaker 2 doesn't properly configure Jaeger tracing in the connectors ENTMQST-2099 When set to blank value for toleration Kafka cluster keeps rolling updates repeatedly ENTMQST-2084 Zookeeper version on the docs doesn't match with the version in AMQ Streams 1.5 ENTMQST-2340 Connection Leak in Operator when Using KafkaConnect API ENTMQST-2338 Fix Secrets or ConfigMaps with dots mounted into Connect ENTMQST-2294 OLM install - yaml contains typo for 'authentication' Table 5.1. Fixed common vulnerabilities and exposures (CVEs) Issue Number Description ENTMQST-2332 CVE-2020-13956 httpclient: apache-httpclient: incorrect handling of malformed authority component in request URIs [amq-st-1]
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_openshift/fixed-issues-str
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/replacing_devices/providing-feedback-on-red-hat-documentation_rhodf
Chapter 1. Support policy for Red Hat build of OpenJDK
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Red Hat build of OpenJDK no longer supports RHEL 6 as a configuration.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.23/rn-openjdk-support-policy
Chapter 9. Bare metal drivers
Chapter 9. Bare metal drivers You can configure bare metal nodes to use one of the drivers that are enabled in the Bare Metal Provisioning service. Each driver includes a provisioning method and a power management type. Some drivers require additional configuration. Each driver described in this section uses PXE for provisioning. Drivers are listed by their power management type. You can add drivers by configuring the IronicEnabledHardwareTypes parameter in your ironic.yaml file. By default, ipmi and redfish are enabled. For the full list of supported plug-ins and drivers, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform . 9.1. Intelligent Platform Management Interface (IPMI) power management driver IPMI is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type, all Bare Metal Provisioning service nodes require an IPMI that is connected to the shared Bare Metal network. The IPMI power manager driver uses the ipmitool utility to remotely manage hardware. You can use the following driver_info properties to configure the IPMI power manager driver for a node: Table 9.1. IPMI driver_info properties Property Description Equivalent ipmitool option ipmi_address (Mandatory) The IP address or hostname of the node. -H ipmi_username The IPMI user name. -U ipmi_password The IPMI password. The password is written to a temporary file. You pass the filename to the ipmitool by using the -f option. -f ipmi_hex_kg_key The hexadecimal Kg key for IPMIv2 authentication. -y ipmi_port The remote IPMI RMCP port. -p ipmi_priv_level IPMI privilege level. Set to one of the following valid values: ADMINISTRATOR (default) CALLBACK OPERATOR USER -L ipmi_protocol_version The version of the IPMI protocol. Set to one of the following valid values: 1.5 for lan 2.0 for lanplus (default) -I ipmi_bridging The type of bridging. Use with nested chassis management controllers (CMCs). Set to one of the following valid values: single dual no (default) n/a ipmi_target_channel Destination channel for a bridged request. Required only if ipmi_bridging is set to single or dual . -b ipmi_target_address Destination address for a bridged request. Required only if ipmi_bridging is set to single or dual . -t ipmi_transit_channel Transit channel for a bridged request. Required only if ipmi_bridging is set to dual . -B ipmi_transit_address Transit address for a bridged request. Required only if ipmi_bridging is set to dual . -T ipmi_local_address Local IPMB address for bridged requests. Use only if ipmi_bridging is set to single or dual . -m ipmi_force_boot_device Set to true to specify if the Bare Metal Provisioning service should specify the boot device to the BMC each time the server is turned on. The BMC is not capable of remembering the selected boot device across power cycles. Disabled by default. n/a ipmi_disable_boot_timeout Set to false to not send a raw IPMI command to disable the 60 second timeout for booting on the node. n/a ipmi_cipher_suite The IPMI cipher suite version to use on the node. Set to one of the following valid values: 3 for AES-128 with SHA1 17 for AES-128 with SHA256 n/a 9.2. Redfish A standard RESTful API for IT infrastructure developed by the Distributed Management Task Force (DMTF). You can use the following driver_info properties to configure the Bare Metal Provisioning service (ironic) connection to Redfish: Table 9.2.
Redfish driver_info properties Property Description redfish_address (Mandatory) The IP address of the Redfish controller. The address must include the authority portion of the URL. If you do not include the scheme it defaults to https . redfish_system_id The canonical path to the system resource the Redfish driver interacts with. The path must include the root service, version, and the unique path to the system within the same authority as the redfish_address property. For example: /redfish/v1/Systems/CX34R87 . This property is only required if the target BMC manages more than one resource. redfish_username The Redfish username. redfish_password The Redfish password. redfish_verify_ca Either a Boolean value, a path to a CA_BUNDLE file, or a directory with certificates of trusted CAs. If you set this value to True the driver verifies the host certificates. If you set this value to False the driver ignores verifying the SSL certificate. If you set this value to a path, the driver uses the specified certificate or one of the certificates in the directory. The default is True . redfish_auth_type The Redfish HTTP client authentication method. Set to one of the following valid values: basic session (recommended) auto (default) - Uses the session authentication method when available, and the basic authentication method when the session method is not available. 9.3. Dell Remote Access Controller (DRAC) DRAC is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type, all Bare Metal Provisioning service nodes require a DRAC that is connected to the shared Bare Metal Provisioning network. Enable the idrac driver, and set the following information in the driver_info of the node: drac_address - The IP address of the DRAC NIC. drac_username - The DRAC user name. drac_password - The DRAC password. Optional: drac_port - The port to use for the WS-Management endpoint. The default is port 443 . Optional: drac_path - The path to use for the WS-Management endpoint. The default path is /wsman . Optional: drac_protocol - The protocol to use for the WS-Management endpoint. Valid values: http , https . The default protocol is https . 9.4. Integrated Remote Management Controller (iRMC) iRMC from Fujitsu is an interface that provides out-of-band remote management features including power management and server monitoring. To use this power management type on a Bare Metal Provisioning service node, the node requires an iRMC interface that is connected to the shared Bare Metal network. Enable the irmc driver, and set the following information in the driver_info of the node: irmc_address - The IP address of the iRMC interface NIC. irmc_username - The iRMC user name. irmc_password - The iRMC password. To use IPMI to set the boot mode or SCCI to get sensor data, you must complete the following additional steps: Enable the sensor method in the ironic.conf file: Replace METHOD with scci or ipmitool . If you enabled SCCI, install the python-scciclient package: Restart the Bare Metal conductor service: Note To use the iRMC driver, iRMC S4 or higher is required. 9.5. Integrated Lights-Out (iLO) iLO from Hewlett-Packard is an interface that provides out-of-band remote management features including power management and server monitoring. To use this power management type, all Bare Metal nodes require an iLO interface that is connected to the shared Bare Metal network. 
Enable the ilo driver, and set the following information in the driver_info of the node: ilo_address - The IP address of the iLO interface NIC. ilo_username - The iLO user name. ilo_password - The iLO password.
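As an illustrative sketch only, the following commands show how driver_info properties such as those in Table 9.1 and Table 9.2 might be set on a node. The node names, addresses, and credentials are placeholders, not values from this guide.
# Enroll a node that uses the IPMI power management driver (placeholder values)
openstack baremetal node create --driver ipmi --name node-0
openstack baremetal node set node-0 \
    --driver-info ipmi_address=192.0.2.10 \
    --driver-info ipmi_username=admin \
    --driver-info ipmi_password=secret

# Equivalent sketch for a node managed through Redfish
openstack baremetal node set node-1 \
    --driver-info redfish_address=https://192.0.2.11 \
    --driver-info redfish_system_id=/redfish/v1/Systems/1 \
    --driver-info redfish_username=admin \
    --driver-info redfish_password=secret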
[ "openstack-config --set /etc/ironic/ironic.conf irmc sensor_method METHOD", "dnf install python-scciclient", "systemctl restart openstack-ironic-conductor.service" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/bare_metal_provisioning/assembly_bare-metal-drivers
Chapter 4. Accessing Red Hat Virtualization
Chapter 4. Accessing Red Hat Virtualization Red Hat Virtualization exposes a number of interfaces for interacting with the components of the virtualization environment. Many of these interfaces are fully supported. Some, however, are supported only for read access or only when your use of them has been explicitly requested by Red Hat Support. Supported Interfaces for Read and Write Access Direct interaction with these interfaces is supported and encouraged for both read and write access: Administration Portal The Administration Portal is a graphical user interface provided by the Red Hat Virtualization Manager. It can be used to manage all the administrative resources in the environment and can be accessed by any supported web browser. See: Administration Guide VM Portal The VM Portal is a graphical user interface provided by the Red Hat Virtualization Manager. It has limited permissions for managing virtual machine resources and is targeted at end users. See: Introduction to the VM Portal Cockpit In Red Hat Virtualization, the Cockpit web interface can be used to deploy a self-hosted engine environment, as well as perform other administrative tasks on a host. It is available by default on Red Hat Virtualization Hosts, and can be installed on Red Hat Enterprise Linux hosts. See: Installing Cockpit on Red Hat Enterprise Linux hosts . Installing Red Hat Virtualization as a self-hosted engine using the Cockpit web interface . REST API The Red Hat Virtualization REST API provides a software interface for querying and modifying the Red Hat Virtualization environment. The REST API can be used by any programming language that supports HTTP actions. See: REST API Guide Software Development Kit (SDK) The Python, Java, and Ruby SDKs are fully supported interfaces for interacting with the Red Hat Virtualization Manager. See: Python SDK Guide Java SDK Guide Ruby SDK Guide Ansible Ansible provides modules to automate post-installation tasks on Red Hat Virtualization. See: Automating Configuration Tasks using Ansible in the Administration Guide . Self-Hosted Engine Command Line Utility The hosted-engine command is used to perform administrative tasks on the Manager virtual machine in self-hosted engine environments. See: Administering the Manager Virtual Machine in the Administration Guide . VDSM Hooks VDSM hooks trigger modifications to virtual machines, based on custom properties specified in the Administration Portal. See: VDSM and Hooks in the Administration Guide . 4.1. Supported Interfaces for Read Access Direct interaction with these interfaces is supported and encouraged only for read access. Use of these interfaces for write access is not supported unless explicitly requested by Red Hat Support. Red Hat Virtualization Manager History Database Read access to the Red Hat Virtualization Manager history ( ovirt_engine_history ) database using the database views specified in the Data Warehouse Guide is supported. Write access is not supported. Libvirt on Hosts Read access to libvirt using the virsh -r command is a supported method of interacting with virtualization hosts. Write access is not supported. 4.2. Unsupported Interfaces Direct interaction with these interfaces is not supported unless your use of them is explicitly requested by Red Hat Support : The vdsm-client Command Use of the vdsm-client command to interact with virtualization hosts is not supported unless explicitly requested by Red Hat Support .
Red Hat Virtualization Manager Database Direct access to, and manipulation of, the Red Hat Virtualization Manager ( engine ) database is not supported unless explicitly requested by Red Hat Support . Important Red Hat Support will not debug user-created scripts or hooks except where it can be demonstrated that there is an issue with the interface being used rather than the user-created script itself. For more general information about Red Hat's support policies see Production Support Scope of Coverage .
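As noted in the read-access interfaces above, libvirt can be queried read-only with virsh -r. A minimal sketch, with a placeholder virtual machine name:
# Read-only libvirt queries on a virtualization host (supported for read access only)
virsh -r list --all
virsh -r dominfo example-vm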
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/product_guide/accessing-rhv
Chapter 10. Red Hat Enterprise Linux Atomic Host 7.7.1
Chapter 10. Red Hat Enterprise Linux Atomic Host 7.7.1 10.1. Atomic Host OStree update : New Tree Version: 7.7.1 (hash: 90573ae0c7b166a0e84a2d2cc836fb7b7a5ba6e9a6a3ad29a651b023aadd85c1) Changes since Tree Version 7.7.0 (hash: 9aee8a02a3ff1cc680fdc3243f537cbc285629f6744e2ea08b4c4f05b7d02127) 10.2. Extras Updated packages : atomic-1.22.1-29.gitb507039.el7 10.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux 7.7 Container Image (rhel7.7, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Universal Base Image 7 Container Image (rhel7/ubi7) Red Hat Universal Base Image 7 Init Container Image (rhel7/ubi7-init) Red Hat Universal Base Image 7 Minimal Container Image (rhel7/ubi7-minimal)
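A brief sketch of how a host is typically moved to a new tree version such as the one listed above; command output depends on the subscribed content and the currently deployed tree.
# Check the currently booted tree and any pending deployment
atomic host status

# Fetch and deploy the latest tree, then reboot into it
atomic host upgrade
systemctl reboot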
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_7_1
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/cache_encoding_and_marshalling/rhdg-downloads_datagrid
Chapter 9. SCSI Tapset
Chapter 9. SCSI Tapset This family of probe points is used to probe SCSI activities. It contains the following probe points:
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/scsi.stp
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_on_rhel_with_fips/making-open-source-more-inclusive
Chapter 2. Important Changes to External Kernel Parameters
Chapter 2. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 6.6. These changes include added or updated procfs entries, sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. MemAvailable Using this parameter provides an estimate of how much memory is available for starting new applications without swapping. However, unlike the data provided by the Cache or Free fields, MemAvailable takes into account page cache and also that not all reclaimable slab will be reclaimable due to items being in use. overcommit_kbytes This parameter allows the user to determine the specific number of kilobytes of physical RAM that a committed address space is not permitted to exceed when the overcommit_memory parameter is set to "2". Therefore, overcommit_kbytes works as the counterpart to overcommit_ratio, and setting one automatically disables the other. meminfo_legacy_layout Setting this parameter to a non-zero value will disable the reporting of new entries introduced to /proc/meminfo and the kernel will keep the legacy (2.6.32) layout when reporting data through that interface. Note that the default value is "1". This parameter is available to Red Hat Enterprise Linux 6 only, for reasons of backward compatibility. disable_cpu_apicid This parameter allows the kdump kernel to disable BSP during boot and then to successfully boot up with multiple processors. This resolves the problem of lack of available interrupt vectors for systems with a high number of devices and ensures that kdump can now successfully capture a core dump on these systems. earlyprintk Previously usable only for VGA hardware, this parameter now supports the "efi" value, which allows users to debug early booting issues on EFI hardware. edac_report By setting the value of this parameter to "on" or "off", the user can enable or disable the Error Detection and Correction (EDAC) module to report hardware events. It is also possible to make EDAC impossible to be overridden by a higher-priority module by using the "force" value. The default value of this parameter is "on". intel_iommu This parameter enables the user to turn off the support of large pages, using the "sp_off" value. By default, however, large pages are supported as long as the Intel input/output memory management unit (IOMMU) meets the requirements. nfs.recover_lost_locks Previously, NFSv4 clients could resume expired or lost file locks. Nevertheless, this sometimes resulted in file corruption if the file was modified in the meantime. Therefore, recovering these locks has been disabled, but can be enabled by changing the value of the above parameter from "0" to "1". Note, however, that doing so still carries a risk of data corruption.
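A short sketch of how some of these tunables are typically inspected or set. The values are placeholders, and the boot-time parameters must be added to the kernel command line in the boot loader, not run as shell commands.
# Inspect the MemAvailable estimate
grep MemAvailable /proc/meminfo

# Cap committed address space at a fixed size when strict overcommit is enabled (placeholder value)
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_kbytes=4194304

# Example kernel command-line additions (set in the boot loader):
#   earlyprintk=efi edac_report=on intel_iommu=sp_off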
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ch-important-changes-to-external-kernel-parameters
Chapter 4. The Migration Toolkit for Applications tools
Chapter 4. The Migration Toolkit for Applications tools You can use the following Migration Toolkit for Applications (MTA) tools for assistance in the various stages of your migration and modernization efforts: User interface Migration Toolkit for Applications Operator CLI IDE add-ons for the following applications: Eclipse Visual Studio Code, Visual Studio Codespaces, and Eclipse Che IntelliJ IDEA Maven plugin Review the details of each tool to determine which tool is suitable for your project. 4.1. The MTA Operator By using the Migration Toolkit for Applications Operator, you can install the user interface on OpenShift Container Platform versions 4.17, 4.16, and 4.15. For more information about the prerequisites for the MTA Operator installation, see OpenShift Operator Life Cycles . 4.2. The MTA user interface By using the user interface for the Migration Toolkit for Applications, you can perform the following tasks: Assess the risks involved in containerizing an application for hybrid cloud environments on Red Hat OpenShift. Analyze the changes that must be made in the code of an application to containerize the application. 4.3. The MTA CLI The CLI is a command-line tool in the Migration Toolkit for Applications that you can use to assess and prioritize migration and modernization efforts for applications. It provides numerous reports that highlight the analysis without using the other tools. The CLI includes a wide array of customization options. By using the CLI, you can tune MTA analysis options or integrate with external automation tools. For more information about using the CLI, see CLI Guide . 4.4. The MTA IDE add-ons You can migrate and modernize applications by using the Migration Toolkit for Applications (MTA) add-ons for the following applications: Eclipse Visual Studio Code, Visual Studio Codespaces, and Eclipse Che IntelliJ IDEA, both the Community and Ultimate versions You can use these add-ons to perform the following tasks: Analyze your projects by using customizable sets of rules. Mark issues in the source code. Fix the issues by using the provided guidance. Use the automatic code replacement, if possible.
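As an illustrative sketch of the CLI analysis mentioned above; the binary name, flags, and target are assumptions that may differ between MTA releases, so see the CLI Guide for the authoritative syntax.
# Analyze an application archive and generate reports (placeholder paths and target)
mta-cli analyze \
    --input /path/to/example-app.war \
    --output /path/to/report-dir \
    --target eap8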
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/introduction_to_the_migration_toolkit_for_applications/about-tools_getting-started-guide
5.228. pam_pkcs11
5.228. pam_pkcs11 5.228.1. RHBA-2012:0972 - pam_pkcs11 bug fix update Updated pam_pkcs11 packages which fix a bug are now available for Red Hat Enterprise Linux 6. The pam_pkcs11 package allows X.509 certificate-based user authentication. It provides access to the certificate and its dedicated private key with an appropriate Public Key Cryptographic Standards #11 (PKCS#11) module. Bug Fix BZ# 756917 When remotely logged into a system with smart card log-in turned on, users saw the following unnecessary error message when trying to use su: The user was still able to use su despite this message. With this update the message is logged but no longer displayed. All users of pam_pkcs11 are advised to upgrade to these updated packages, which fix this bug.
[ "ERROR:pam_pkcs11.c:224: Remote login (from localhost:13.0) is not (yet) supported" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/pam_pkcs11
5.4. Configuring IPv4 Settings
5.4. Configuring IPv4 Settings Configuring IPv4 Settings with control-center Procedure Press the Super key to enter the Activities Overview, type Settings and then press Enter . Then, select the Network tab on the left-hand side, and the Network settings tool appears. Proceed to the section called "Configuring New Connections with control-center" . Select the connection that you want to edit and click on the gear wheel icon. The Editing dialog appears. Click the IPv4 menu entry. The IPv4 menu entry allows you to configure the method used to connect to a network, to enter IP address, DNS and route information as required. The IPv4 menu entry is available when you create and modify one of the following connection types: wired, wireless, mobile broadband, VPN or DSL. If you are using DHCP to obtain a dynamic IP address from a DHCP server, you can simply set Addresses to Automatic (DHCP) . If you need to configure static routes, see Section 4.3, "Configuring Static Routes with GUI" . Setting the Method for IPV4 Using nm-connection-editor You can use the nm-connection-editor to edit and configure connection settings. This procedure describes how you can configure the IPv4 settings: Procedure Enter nm-connection-editor in a terminal. For an existing connection type, click the gear wheel icon. Figure 5.2. Editing a connection Click IPv4 Settings . Figure 5.3. Configuring IPv4 Settings Available IPv4 Methods by Connection Type When you click the Method drop-down menu, depending on the type of connection you are configuring, you are able to select one of the following IPv4 connection methods. All of the methods are listed here according to which connection type, or types, they are associated with: Wired, Wireless and DSL Connection Methods Automatic (DHCP) - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses. You do not need to fill in the DHCP client ID field. Automatic (DHCP) addresses only - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses but you want to assign DNS servers manually. Manual - Choose this option if you want to assign IP addresses manually. Link-Local Only - Choose this option if the network you are connecting to does not have a DHCP server and you do not want to assign IP addresses manually. Random addresses will be assigned as per RFC 3927 with prefix 169.254/16 . Shared to other computers - Choose this option if the interface you are configuring is for sharing an Internet or WAN connection. The interface is assigned an address in the 10.42.x.1/24 range, a DHCP server and DNS server are started, and the interface is connected to the default network connection on the system with network address translation ( NAT ). Disabled - IPv4 is disabled for this connection. Mobile Broadband Connection Methods Automatic (PPP) - Choose this option if the network you are connecting to assigns your IP address and DNS servers automatically. Automatic (PPP) addresses only - Choose this option if the network you are connecting to assigns your IP address automatically, but you want to manually specify DNS servers. VPN Connection Methods Automatic (VPN) - Choose this option if the network you are connecting to assigns your IP address and DNS servers automatically. Automatic (VPN) addresses only - Choose this option if the network you are connecting to assigns your IP address automatically, but you want to manually specify DNS servers. 
DSL Connection Methods Automatic (PPPoE) - Choose this option if the network you are connecting to assigns your IP address and DNS servers automatically. Automatic (PPPoE) addresses only - Choose this option if the network you are connecting to assigns your IP address automatically, but you want to manually specify DNS servers. If you are using DHCP to obtain a dynamic IP address from a DHCP server, you can simply set Method to Automatic (DHCP) . If you need to configure static routes, click the Routes button and for more details on configuration options, see Section 4.3, "Configuring Static Routes with GUI" .
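The procedures above use the graphical tools. As a rough command-line equivalent, which is not part of this procedure, the same IPv4 methods can be set with nmcli; the connection name and addresses below are placeholders.
# DHCP, equivalent to the Automatic (DHCP) method
nmcli connection modify "example-connection" ipv4.method auto

# Static addressing, equivalent to the Manual method
nmcli connection modify "example-connection" \
    ipv4.method manual \
    ipv4.addresses 192.0.2.10/24 \
    ipv4.gateway 192.0.2.1 \
    ipv4.dns 192.0.2.53
nmcli connection up "example-connection"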
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ipv4_settings
31.4. Unloading a Module
31.4. Unloading a Module You can unload a kernel module by running modprobe -r <module_name> as root. For example, assuming that the wacom module is already loaded into the kernel, you can unload it by running: However, this command will fail if a process is using: the wacom module, a module that wacom directly depends on, or any module that wacom, through the dependency tree, depends on indirectly. See Section 31.1, "Listing Currently-Loaded Modules" for more information about using lsmod to obtain the names of the modules which are preventing you from unloading a certain module. For example, if you want to unload the firewire_ohci module (because you believe there is a bug in it that is affecting system stability, for example), your terminal session might look similar to this: You have figured out the dependency tree (which does not branch in this example) for the loaded Firewire modules: firewire_ohci depends on firewire_core , which itself depends on crc-itu-t . You can unload firewire_ohci using the modprobe -v -r <module_name> command, where -r is short for --remove and -v for --verbose : The output shows that modules are unloaded in the reverse order that they are loaded, given that no processes depend on any of the modules being unloaded. Important Although the rmmod command can be used to unload kernel modules, it is recommended to use modprobe -r instead.
[ "~]# modprobe -r wacom", "~]# modinfo -F depends firewire_ohci depends: firewire-core ~]# modinfo -F depends firewire_core depends: crc-itu-t ~]# modinfo -F depends crc-itu-t depends:", "~]# modprobe -r -v firewire_ohci rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-ohci.ko rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-core.ko rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/lib/crc-itu-t.ko" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-unloading_a_module
Chapter 2. An overview of OpenShift Data Foundation architecture
Chapter 2. An overview of OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation provides services for, and can run internally from Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on Installer Provisioned Infrastructure or User Provisioned Infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for the Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see the interoperability matrix . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/red_hat_openshift_data_foundation_architecture/an-overview-of-openshift-data-foundation-architecture_rhodf
Chapter 13. Removing
Chapter 13. Removing The steps for removing the Red Hat build of OpenTelemetry from an OpenShift Container Platform cluster are as follows: Shut down all Red Hat build of OpenTelemetry pods. Remove any OpenTelemetryCollector instances. Remove the Red Hat build of OpenTelemetry Operator. 13.1. Removing an OpenTelemetry Collector instance by using the web console You can remove an OpenTelemetry Collector instance in the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Red Hat build of OpenTelemetry Operator OpenTelemetryInstrumentation or OpenTelemetryCollector . To remove the relevant instance, select Delete ... Delete . Optional: Remove the Red Hat build of OpenTelemetry Operator. 13.2. Removing an OpenTelemetry Collector instance by using the CLI You can remove an OpenTelemetry Collector instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> Procedure Get the name of the OpenTelemetry Collector instance by running the following command: USD oc get deployments -n <project_of_opentelemetry_instance> Remove the OpenTelemetry Collector instance by running the following command: USD oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance> Optional: Remove the Red Hat build of OpenTelemetry Operator. Verification To verify successful removal of the OpenTelemetry Collector instance, run oc get deployments again: USD oc get deployments -n <project_of_opentelemetry_instance> 13.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI
[ "oc login --username=<your_username>", "oc get deployments -n <project_of_opentelemetry_instance>", "oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>", "oc get deployments -n <project_of_opentelemetry_instance>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/dist-tracing-otel-removing
Chapter 2. Eclipse Temurin overview
Chapter 2. Eclipse Temurin overview Eclipse Temurin is a free and open source implementation of the Java Platform, Standard Edition (Java SE) from the Eclipse Temurin Working Group. Eclipse Temurin is based on the upstream Red Hat build of OpenJDK 8u, Red Hat build of OpenJDK 11u, and Red Hat build of OpenJDK 17u projects and includes the Shenandoah Garbage Collector from version 11 and later versions. Eclipse Temurin does not vary structurally from the upstream distribution of Red Hat build of OpenJDK. Eclipse Temurin shares the following similar capabilities as Red Hat build of OpenJDK: Multi-platform - Red Hat offers support of Eclipse Temurin on Microsoft Windows, RHEL and macOS, so that you can standardize on a single Java platform on numerous environments, such as desktop, data center, and hybrid cloud. Frequent releases - Eclipse Temurin delivers quarterly updates of JRE and JDK for the Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 17 distributions. These updates are available as RPM, MSI, archive files, and containers. Long-term support (LTS) - Red Hat supports the recently released Eclipse Temurin 8, Eclipse Temurin 11, and Eclipse Temurin 17. For more information about the support lifecycle, see Red Hat build of OpenJDK Life Cycle and Support Policy .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/getting_started_with_eclipse_temurin/temurin11-overview_openjdk
Chapter 9. Technology previews
Chapter 9. Technology previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 9. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 9.1. Installer and image creation NVMe over TCP for RHEL installation is now available as a Technology Preview With this Technology Preview, you can now use NVMe over TCP volumes to install RHEL after configuring the firmware. While adding disks from the Installation Destination screen, you can select the NVMe namespaces under the NVMe Fabrics Devices section. Jira:RHEL-10216 [1] Installation of bootable OSTree native containers is now available as a Technology Preview The ostreecontainer Kickstart command is now available in Anaconda as a Technology Preview. You can use this command to install the operating system from an OSTree commit encapsulated in an OCI image. When performing Kickstart installations, the following commands are available together with ostreecontainer : graphical, text, or cmdline ostreecontainer clearpart, zerombr autopart part logvol, volgroup reboot and shutdown lang rootpw sshkey bootloader - Available only with the --append optional parameter. user When you specify a group within the user command, the user account can be assigned only to a group that already exists in the container image. Kickstart commands not listed here are allowed to be used with ostreecontainer command, however, they are not guaranteed to work as expected with package-based installations. However, the following Kickstart commands are unsupported together with ostreecontainer : %packages (any necessary packages must be already available in the container image) url (if there is a need to fetch a stage2 image for installation, for example, PXE installations, use inst.stage2= on the kernel instead of providing a url for stage2 inside the Kickstart file) liveimg vnc authconfig and authselect (provide relevant configuration in the container image instead) module repo zipl zfcp Installation of bootable OSTree native containers is not supported in interactive installations that use partial Kickstart files. Note: When customizing a mount point, you must define the mount point in the /mnt directory and ensure that the mount point directory exists inside /var/mnt in the container image. Jira:RHEL-2250 [1] Boot loader installation and configuration via bootupd / bootupctl in Anaconda is now available as a Technology Preview As the ostreecontainer Kickstart command is now available in Anaconda as a Technology Preview, you can use it to install the operating system from an OSTree commit encapsulated in an OCI image. Anaconda automatically arranges a boot loader installation and configuration via the bootupd / bootupctl tool contained within the container image, even without an explicit boot loader configuration in Kickstart. Jira:RHEL-17205 [1] The bootc image builder tool is available as a Technology Preview The bootc image builder tool, now available as a Technology Preview, works as a container to easily create and deploy compatible disk images from the bootc container inputs. After running your container image with bootc image builder , you can generate images for the architecture that you need. Then, you can deploy the resulting image on VMs, clouds, or servers. You can easily update the images with the bootc, instead of having to regenerate the content with bootc image builder every time a new update is required. 
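A sketch of how bootc image builder is typically invoked; the registry paths, image names, and flags here are assumptions, not values from this document.
# Build a QCOW2 disk image from a bootc container image (placeholder image references)
sudo podman run --rm -it --privileged \
    -v ./output:/output \
    registry.redhat.io/rhel9/bootc-image-builder:latest \
    --type qcow2 \
    quay.io/example/my-bootc-image:latest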
Jira:RHELDOCS-17468 [1] A new rhel9/bootc-image-builder container image is available as a Technology Preview The rhel9/bootc-image-builder container image for image mode for RHEL includes a minimal version of image builder that converts bootable container images, for example rhel-bootc, to different disk image formats, such as QCOW2, AMI, VMDK, ISO, and others. Jira:RHELDOCS-17733 [1] 9.2. Security gnutls now uses kTLS as a Technology Preview The updated gnutls packages can use kernel TLS (kTLS) for accelerating data transfer on encrypted channels as a Technology Preview. To enable kTLS, add the tls.ko kernel module using the modprobe command, and create a new configuration file /etc/crypto-policies/local.d/gnutls-ktls.txt for the system-wide cryptographic policies with the following content: Note that the current version does not support updating traffic keys through TLS KeyUpdate messages, which impacts the security of AES-GCM ciphersuites. See the RFC 7841 - TLS 1.3 document for more information. Bugzilla:2108532 [1] OpenSSL clients can use the QUIC protocol as a Technology Preview OpenSSL can use the QUIC transport layer network protocol on the client side with the rebase to OpenSSL version 3.2.2 as a Technology Preview. Jira:RHELDOCS-18935 [1] The io_uring interface is available as a Technology Preview io_uring is a new and effective asynchronous I/O interface, which is now available as a Technology Preview. By default, this feature is disabled. You can enable this interface by setting the kernel.io_uring_disabled sysctl variable to any one of the following values: 0 All processes can create io_uring instances as usual. 1 io_uring creation is disabled for unprivileged processes. The io_uring_setup fails with the -EPERM error unless the calling process is privileged by the CAP_SYS_ADMIN capability. Existing io_uring instances can still be used. 2 io_uring creation is disabled for all processes. The io_uring_setup always fails with -EPERM . Existing io_uring instances can still be used. This is the default setting. An updated version of the SELinux policy to enable the mmap system call on anonymous inodes is also required to use this feature. By using the io_uring command pass-through, an application can issue commands directly to the underlying hardware, such as nvme . Jira:RHEL-11792 [1] 9.3. RHEL for Edge FDO now provides storing and querying Owner Vouchers from a SQL backend as a Technology Preview With this Technology Preview, FDO manufacturer-server , onboarding-server , and rendezvous-server are available for storing and querying Owner Vouchers from a SQL backend. As a result, you can select a SQL datastore in the FDO servers options, along with credentials and other parameters, to store the Owner Vouchers. Jira:RHELDOCS-17752 [1] 9.4. Shells and command-line tools GIMP available as a Technology Preview in RHEL 9 GNU Image Manipulation Program (GIMP) 2.99.8 is now available in RHEL 9 as a Technology Preview. The gimp package version 2.99.8 is a pre-release version with a set of improvements, but a limited set of features and no guarantee for stability. As soon as the official GIMP 3 is released, it will be introduced into RHEL 9 as an update of this pre-release version. In RHEL 9, you can install gimp easily as an RPM package. Bugzilla:2047161 [1] 9.5. Infrastructure services Socket API for TuneD available as a Technology Preview The socket API for controlling TuneD through a UNIX domain socket is now available as a Technology Preview. 
The socket API maps one-to-one with the D-Bus API and provides an alternative communication method for cases where D-Bus is not available. By using the socket API, you can control the TuneD daemon to optimize the performance, and change the values of various tuning parameters. The socket API is disabled by default, you can enable it in the tuned-main.conf file. Bugzilla:2113900 9.6. Networking UDP encapsulation in packet offload mode is now available as a Technology Preview With IPsec packet offload, the kernel can offload the entire IPsec encapsulation process to a NIC to reduce the workload. With this update, the packet offload has been improved by supporting User Datagram Protocol (UDP) encapsulation of ipsec tunnels when in packet offload mode. Jira:RHEL-30141 [1] WireGuard VPN is available as a Technology Preview WireGuard, which Red Hat provides as an unsupported Technology Preview, is a high-performance VPN solution that runs in the Linux kernel. It uses modern cryptography and is easier to configure than other VPN solutions. Additionally, the small code-basis of WireGuard reduces the surface for attacks and, therefore, improves the security. For further details, see Setting up a WireGuard VPN . Bugzilla:1613522 [1] kTLS available as a Technology Preview RHEL provides kernel Transport Layer Security (KTLS) as a Technology Preview. kTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. kTLS also includes the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that provides this functionality. Bugzilla:1570255 [1] The systemd-resolved service is available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, a Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that systemd-resolved is an unsupported Technology Preview. Bugzilla:2020529 The PRP and HSR protocols are now available as a Technology Preview This update adds the hsr kernel module that provides the following protocols: Parallel Redundancy Protocol (PRP) High-availability Seamless Redundancy (HSR) The IEC 62439-3 standard defines these protocols, and you can use this feature to configure zero-loss redundancy in Ethernet networks. Bugzilla:2177256 [1] NetworkManager and the Nmstate API support MACsec hardware offload You can use both NetworkManager and the Nmstate API to enable MACsec hardware offload if the hardware supports this feature. As a result, you can offload MACsec operations, such as encryption, from the CPU to the network interface controller. Note that this feature is an unsupported Technology Preview. Jira:RHEL-24337 NetworkManager enables configuring HSR and PRP interfaces High-availability Seamless Redundancy (HSR) and Parallel Redundancy Protocol (PRP) are network protocols that provide seamless failover against failure of any single network component. Both protocols are transparent to the application layer, meaning that users do not experience any disruption in communication or any loss of data, because a switch between the main path and the redundant path happens very quickly and without awareness of the user. Now it is possible to enable and configure HSR and PRP interfaces using the NetworkManager service through the nmcli utility and the DBus message system. 
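An unverified sketch of the nmcli-based HSR configuration mentioned above; the connection type and property names are assumptions and may differ between NetworkManager versions.
# Create an HSR interface over two Ethernet ports (assumed property names)
nmcli connection add type hsr con-name hsr0 ifname hsr0 \
    hsr.port1 enp1s0 hsr.port2 enp2s0
nmcli connection up hsr0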
Jira:RHEL-5852 Offloading IPsec encapsulation to a NIC is now available as a Technology Preview This update adds the IPsec packet offloading capabilities to the kernel. Previously, it was possible to only offload the encryption to a network interface controller (NIC). With this enhancement, the kernel can now offload the entire IPsec encapsulation process to a NIC to reduce the workload. Note that offloading the IPsec encapsulation process to a NIC also reduces the ability of the kernel to monitor and filter such packets. Bugzilla:2178699 [1] The Soft-iWARP driver is available as a Technology Preview Soft-iWARP (siw) is a software, Internet Wide-area RDMA Protocol (iWARP), kernel driver for Linux. Soft-iWARP implements the iWARP protocol suite over the TCP/IP network stack. This protocol suite is fully implemented in software and does not require a specific Remote Direct Memory Access (RDMA) hardware. Soft-iWARP enables a system with a standard Ethernet adapter to connect to an iWARP adapter or to another system with already installed Soft-iWARP. Bugzilla:2023416 [1] rvu_af , rvu_nicpf , and rvu_nicvf available as Technology Preview The following kernel modules are available as Technology Preview for Marvell OCTEON TX2 Infrastructure Processor family: rvu_nicpf Marvell OcteonTX2 NIC Physical Function driver rvu_nicvf Marvell OcteonTX2 NIC Virtual Function driver rvu_nicvf Marvell OcteonTX2 RVU Admin Function driver Bugzilla:2040643 [1] Network drivers for modems in RHEL are available as Technology Preview Device manufacturers support Federal Communications Commission (FCC) locking as the default setting. FCC provides a lock to bind WWAN drivers to a specific system where WWAN drivers provide a channel to communicate with modems. Based on the modem PCI ID, manufacturers integrate unlocking tools on Red Hat Enterprise Linux for ModemManager. However, a modem remains unusable if not unlocked previously even if the WWAN driver is compatible and functional. Red Hat Enterprise Linux provides the drivers for the following modems with limited functionality as a Technology Preview: Qualcomm MHI WWAM MBIM - Telit FN990Axx Intel IPC over Shared Memory (IOSM) - Intel XMM 7360 LTE Advanced Mediatek t7xx (WWAN) - Fibocom FM350GL Intel IPC over Shared Memory (IOSM) - Fibocom L860GL modem Jira:RHELDOCS-16760 [1] , Jira:RHEL-6564, Bugzilla:2110561, Bugzilla:2123542, Bugzilla:2222914 Segment Routing over IPv6 (SRv6) is available as a Technology Preview The RHEL kernel provides Segment Routing over IPv6 (SRv6) as a Technology Preview. You can use this functionality to optimize traffic flows in edge computing or to improve network programmability in data centers. However, the most significant use case is the end-to-end (E2E) network slicing in 5G deployment scenarios. In that area, the SRv6 protocol provides you with the programmable custom network slices and resource reservations to address network requirements for specific applications or services. At the same time, the solution can be deployed on a single-purpose appliance, and it satisfies the need for a smaller computational footprint. Bugzilla:2186375 [1] kTLS rebased to version 6.3 The kernel Transport Layer Security (KTLS) functionality is a Technology Preview. 
In RHEL 9.3, kTLS was rebased to the 6.3 upstream version, and notable changes include: Added support for 256-bit keys with TX device offload Delivered various bug fixes Bugzilla:2183538 [1] Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol that implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE, which maintains two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 9. Note that Soft-RoCE is deprecated since RHEL 9.5 and will be removed in RHEL 10. 9.7. Kernel python-drgn available as a Technology Preview The python-drgn package provides an advanced debugging utility with an emphasis on programmability. You can use its Python command-line interface to debug both live kernels and kernel dumps. Additionally, python-drgn offers scripting capabilities for you to automate debugging tasks and conduct intricate analysis of the Linux kernel. Jira:RHEL-6973 [1] The IAA crypto driver is now available as a Technology Preview The Intel(R) In-Memory Analytics Accelerator (Intel(R) IAA) is a hardware accelerator that provides very high throughput compression and decompression combined with primitive analytic functions. The iaa_crypto driver, which offloads compression and decompression operations from the CPU, has been introduced in RHEL 9.4 as a Technology Preview. It supports compression and decompression compatible with the DEFLATE compression standard described in RFC 1951. The iaa_crypto driver is designed to work as a layer underneath higher-level compression devices such as zswap . For details about the IAA crypto driver, see: Intel(R) In-Memory Analytics Accelerator (Intel(R) IAA) User Guide IAA Compression Accelerator Crypto Driver Jira:RHEL-20145 [1] 9.8. File systems and storage NVMe-oF Discovery Service features are now fully supported The NVMe-oF Discovery Service features, defined in the NVMexpress.org Technical Proposals (TP) 8013 and 8014 and introduced in Red Hat Enterprise Linux 9.0 as a Technology Preview, are now fully supported. To use these features, use the nvme-cli 2.0 package and attach the host to an NVMe-oF target device that implements TP-8013 or TP-8014. For more information about TP-8013 and TP-8014, see the NVM Express 2.0 Ratified TPs from the https://nvmexpress.org/specifications/ website. Bugzilla:2021672 [1] nvme-stas package available as a Technology Preview The nvme-stas package, which is a Central Discovery Controller (CDC) client for Linux, is now available as a Technology Preview. It handles Asynchronous Event Notifications (AEN), automated NVMe subsystem connection controls, error handling and reporting, and automatic ( zeroconf ) and manual configuration. This package consists of two daemons, Storage Appliance Finder ( stafd ) and Storage Appliance Connector ( stacd ). Bugzilla:1893841 [1] 9.9. Dynamic programming languages, web and database servers 9.10. Compilers and development tools jmc-core and owasp-java-encoder available as a Technology Preview RHEL 9 is distributed with the jmc-core and owasp-java-encoder packages as Technology Preview features for the AMD and Intel 64-bit architectures. jmc-core is a library providing core APIs for Java Development Kit (JDK) Mission Control, including libraries for parsing and writing JDK Flight Recording files, and libraries for Java Virtual Machine (JVM) discovery through Java Discovery Protocol (JDP). 
The owasp-java-encoder package provides a collection of high-performance low-overhead contextual encoders for Java. Note that since RHEL 9.2, jmc-core and owasp-java-encoder are available in the CodeReady Linux Builder (CRB) repository, which you must explicitly enable. See How to enable and make use of content within CodeReady Linux Builder for more information. Bugzilla:1980981 libabigail : Flexible array conversion warning-suppression available as a Technology Preview As a Technology Preview, when comparing binaries, you can suppress warnings related to fake flexible arrays that were converted to true flexible arrays by using the following suppression specification: Jira:RHEL-16629 [1] 9.11. Identity Management DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now implement DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2 Secure Domain Name System (DNS) Deployment Guide DNSSEC Key Rollover Timing Considerations Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. Bugzilla:2084180 HSM support is available as a Technology Preview Hardware Security Module (HSM) support is now available in Identity Management (IdM) as a Technology Preview. You can store your key pairs and certificates for your IdM CA and KRA on an HSM. This adds physical security to the private key material. IdM relies on the networking features of the HSM to share the keys between machines to create replicas. The HSM provides additional security without visibly affecting most IPA operations. When using low-level tooling the certificates and keys are handled differently but this is seamless for most users. Note Migration of an existing CA or KRA to an HSM-based setup is not supported. You need to reinstall the CA or KRA with keys on the HSM. You need the following: A supported HSM The HSM PKCS #11 library An available slot, token, and the token password To install a CA or KRA with keys stored on an HSM, you must specify the token name and the path to the PKCS #11 library. For example: Jira:RHELDOCS-17465 [1] LMDB database type is available in Directory Server as a Technology Preview The Lightning Memory-Mapped Database (LMDB) is available in Directory Server as an unsupported Technology Preview. Currently, you can use only the command line to migrate or install instances with LMDB. To migrate existing instances from Berkeley Database (BDB) to LMDB, use the dsctl instance_name dblib bdb2mdb command that sets the nsslapd-backend-implement parameter value to mdb . Note that this command does not clean up the old data. You can revert the database type by changing nsslapd-backend-implement back to bdb . For more details, see Migrating the database type from BDB to LMDB on an existing DS instance . Important Before migrating existing instances from BDB to LMDB, backup your databases. For more details, see Backing up Directory Server . 
To create a new instance with LMDB, you can use either of the following methods: In the interactive installer, set mdb in the Choose whether mdb or bdb is used line. For more details, see Creating an instance using the interactive installer . In the .inf file, set db_lib = mdb in the [slapd] section. For more details, see Creating a .inf file for a Directory Server instance installation . Directory Server stores LMDB settings under the cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config entry, which includes the following new configuration parameters: nsslapd-mdb-max-size sets the database maximum size in bytes. Important: Make sure that nsslapd-mdb-max-size is high enough to store all intended data. However, the value must not be so high that it impacts performance, because the database file is memory-mapped. nsslapd-mdb-max-readers sets the maximum number of read operations that can be opened at the same time. Directory Server autotunes this setting. nsslapd-mdb-max-dbs sets the maximum number of named database instances that can be included within the memory-mapped database file. Along with the new LMDB settings, you can still use the nsslapd-db-home-directory database configuration parameter. In case of mixed implementations, you can have BDB and LMDB replicas in your replication topology. Jira:RHELDOCS-19061 [1] ACME supports automatically removing expired certificates as a Technology Preview The Automated Certificate Management Environment (ACME) service in Identity Management (IdM) adds an automatic mechanism to purge expired certificates from the certificate authority (CA) as a Technology Preview. With this enhancement, ACME can now automatically remove expired certificates at specified intervals. Removing expired certificates is disabled by default. To enable it, enter: This removes expired certificates on the first day of every month at midnight. Note Expired certificates are removed after their retention period. By default, this is 30 days after expiry. For more details, see the ipa-acme-manage(1) man page. Jira:RHELPLAN-145900 IdM-to-IdM migration is available as a Technology Preview IdM-to-IdM migration is available in Identity Management as a Technology Preview. You can use a new ipa-migrate command to migrate all IdM-specific data, such as SUDO rules, HBAC, DNA ranges, hosts, services, and more, to another IdM server. This can be useful, for example, when moving IdM from a development or staging environment into a production one or when migrating IdM data between two production servers. Jira:RHELDOCS-18408 [1] 9.12. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is available for the 64-bit ARM architecture as a Technology Preview. You can now connect to the desktop session on a 64-bit ARM server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on 64-bit ARM. For example: The Mozilla Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Mozilla Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. 
Jira:RHELPLAN-27394 [1] GNOME for the IBM Z architecture available as a Technology Preview The GNOME desktop environment is available for the IBM Z architecture as a Technology Preview. You can now connect to the desktop session on an IBM Z server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on IBM Z. For example: The Mozilla Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Mozilla Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27737 [1] 9.13. The web console The RHEL web console can now manage WireGuard connections Starting with RHEL 9.4, you can use the RHEL web console to create and manage WireGuard VPN connections. Note that both the WireGuard technology and its web console integration are unsupported Technology Previews. Jira:RHELDOCS-17520 [1] 9.14. Virtualization Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, and IBM Z hosts with RHEL 9. With this feature, a RHEL 7, RHEL 8, or RHEL 9 VM that runs on a physical RHEL 9 host can act as a hypervisor and host its own VMs. Jira:RHELDOCS-17040 [1] AMD SEV, SEV-ES, and SEV-SNP for KVM virtual machines As a Technology Preview, RHEL 9 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts the VM's memory to protect the VM from access by the host. This increases the security of the VM. In addition, the enhanced Encrypted State version of SEV (SEV-ES) is also provided as Technology Preview. SEV-ES encrypts all CPU register contents when a VM stops running. This prevents the host from modifying the VM's CPU registers or reading any information from them. RHEL 9.5 and later also provide the Secure Nested Paging (SEV-SNP) feature as Technology Preview. SNP enhances SEV and SEV-ES by improving their memory integrity protection, which helps prevent hypervisor-based attacks, such as data replay or memory re-mapping. Note that SEV and SEV-ES work only on the 2nd generation of AMD EPYC CPUs (codenamed Rome) or later. Similarly, SEV-SNP works only on 4th generation AMD EPYC CPUs (codenamed Genoa) or later. Also note that RHEL 9 includes SEV, SEV-ES, and SEV-SNP encryption, but not the SEV, SEV-ES, and SEV-SNP security attestation and live migration. Jira:RHELPLAN-65217 [1] Intel TDX in RHEL guests As a Technology Preview, the Intel Trust Domain Extension (TDX) feature can now be used in RHEL 9.2 and later guest operating systems. If the host system supports TDX, you can deploy hardware-isolated RHEL 9 virtual machines (VMs), called trust domains (TDs). Note, however, that TDX currently does not work with kdump , and enabling TDX will cause kdump to fail on the VM. Bugzilla:1955275 [1] A unified kernel image of RHEL is now available as a Technology Preview As a Technology Preview, you can now obtain the RHEL kernel as a unified kernel image (UKI) for virtual machines (VMs). A unified kernel image combines the kernel, initramfs, and kernel command line into a single signed binary file. 
UKIs can be used in virtualized and cloud environments, especially in confidential VMs where strong SecureBoot capabilities are required. The UKI is available as a kernel-uki-virt package in RHEL 9 repositories. Currently, the RHEL UKI can only be used in a UEFI boot configuration. Bugzilla:2142102 [1] CPU clusters on 64-bit ARM As a Technology Preview, you can now create KVM virtual machines that use multiple 64-bit ARM CPU clusters in their CPU topology. Jira:RHEL-7043 [1] Live migrating a VM with a Mellanox virtual function is now available as a Technology Preview As a Technology Preview, you can now live migrate a virtual machine (VM) with an attached virtual function (VF) of a Mellanox networking device. This feature is currently available only on a Mellanox CX-7 networking device. The VF on the Mellanox CX-7 networking device uses a new mlx5_vfio_pci driver, which adds functionality that is necessary for the live migration, and libvirt binds the new driver to the VF automatically. Jira:RHEL-13007 [1] 9.15. RHEL in cloud environments RHEL is now available on Azure confidential VMs as a Technology Preview With the updated RHEL kernel, you can now create and run RHEL confidential virtual machines (VMs) on Microsoft Azure as a Technology Preview. The newly added unified kernel image (UKI) now enables booting encrypted confidential VM images on Azure. The UKI is available as a kernel-uki-virt package in RHEL 9 repositories. Currently, the RHEL UKI can only be used in a UEFI boot configuration. Jira:RHELPLAN-139800 [1] 9.16. Containers composefs filesystem is available as a Technology Preview composefs is the default backend for container storage. The key technologies composefs uses are: OverlayFS as the kernel interface Enhanced Read-Only File System (EROFS) for a mountable metadata tree The fs-verity feature (optional) from the lower filesystem Key advantages of composefs : Separation between metadata and data. composefs does not store any persistent data. The underlying metadata and data files are stored in a valid lower Linux filesystem such as ext4 , xfs , btrfs , and so on. Mounting multiple composefs instances with shared storage. Data files are shared in the page cache to enable multiple container images to share their memory. Support for fs-verity validation of the content files. Jira:RHEL-52237 The podman-machine command is unsupported The podman-machine command for managing virtual machines is available only as a Technology Preview. Instead, run Podman directly from the command line. Jira:RHELDOCS-16861 [1] A new rhel9/rhel-bootc container image is available as a Technology Preview The rhel9/rhel-bootc container image is now available in the Red Hat Container Registry as a Technology Preview. With the RHEL bootable container images, you can build, test, and deploy an operating system exactly as a container. The RHEL bootable container images differ from the existing application Universal Base Images (UBI) thanks to the following enhancements: RHEL bootable container images contain additional components necessary to boot, such as the kernel, initrd, boot loader, and firmware, among others. There are no changes to existing container images. For more information, see Red Hat Ecosystem Catalog . Jira:RHELDOCS-17803 [1] Pushing and pulling images compressed with zstd:chunked is available as a Technology Preview The zstd:chunked compression is now available as a Technology Preview. Jira:RHEL-32267
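For context on the zstd:chunked item above, the following is a minimal sketch of how a compression format can be selected when pushing an image with Podman. The registry and image names are placeholder assumptions, and the exact option behavior depends on the Podman version shipped with your release:
# Push an image using zstd:chunked compression (registry and image names are hypothetical)
podman push --compression-format zstd:chunked registry.example.com/rhel9/my-app:latest
# Pulling the image requires no extra options; the format is handled transparently
podman pull registry.example.com/rhel9/my-app:latest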
[ "[global] ktls = true", "[suppress_type] type_kind = struct has_size_change = true has_strict_flexible_array_data_member_conversion = true", "ipa-server-install -r EXAMPLE.TEST -U --setup-dns --allow-zone-overlap --no-forwarders -N --auto-reverse --random-serial-numbers --token-name=HSM-TOKEN --token-library-path=/opt/nfast/toolkits/pkcs11/libcknfast.so --setup-kra", "ipa-acme-manage pruning --enable --cron \"0 0 1 * *\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.5_release_notes/technology-previews
Lightspeed
Lightspeed OpenShift Container Platform 4.15 About Lightspeed Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/lightspeed/index
Preface
Preface Red Hat Quay is an enterprise-quality container registry. Use Red Hat Quay to build and store container images, then make them available to deploy across your enterprise. The Red Hat Quay Operator provides a simple method to deploy and manage Red Hat Quay on an OpenShift cluster. With the release of Red Hat Quay 3.4.0, the Red Hat Quay Operator was re-written to offer an enhanced experience and to add more support for Day 2 operations. As a result, the Red Hat Quay Operator is now simpler to use and is more opinionated. The key differences from versions prior to Red Hat Quay 3.4.0 include the following: The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource (a minimal example is sketched after this list). The default installation options produce a fully supported Red Hat Quay environment, with all managed dependencies, such as database, caches, object storage, and so on, supported for production use. Note Some components might not be highly available. A new validation library for Red Hat Quay's configuration. Object storage can now be managed by the Red Hat Quay Operator using the ObjectBucketClaim Kubernetes API. Note Red Hat OpenShift Data Foundation can be used to provide a supported implementation of this API on OpenShift Container Platform. Customization of the container images used by deployed pods for testing and development scenarios.
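As a hedged illustration of the QuayRegistry custom resource mentioned in the list above, a minimal manifest might look as follows. The name and namespace are placeholders, and the available component kinds and defaults depend on your Red Hat Quay Operator version:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
# With no spec.components overrides, the Operator manages all dependencies
# (database, caches, object storage, and so on) with its defaults.
You could then create the resource with a command such as oc create -n quay-enterprise -f quayregistry.yaml and watch the status conditions reported by the Operator.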
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/pr01
29.8. OProfile Support for Java
29.8. OProfile Support for Java OProfile allows you to profile dynamically compiled code (also known as "just-in-time" or JIT code) of the Java Virtual Machine (JVM). OProfile in Red Hat Enterprise Linux 6 includes built-in support for the JVM Tools Interface (JVMTI) agent library, which supports Java 1.5 and higher. 29.8.1. Profiling Java Code To profile JIT code from the Java Virtual Machine with the JVMTI agent, add the following to the JVM startup parameters: Note The oprofile-jit package must be installed on the system in order to profile JIT code with OProfile. To learn more about Java support in OProfile, see the OProfile Manual, which is linked from Section 29.11, "Additional Resources" .
[ "-agentlib:jvmti_oprofile" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-oprofile-java-support
Chapter 2. CatalogSource [operators.coreos.com/v1alpha1]
Chapter 2. CatalogSource [operators.coreos.com/v1alpha1] Description CatalogSource is a repository of CSVs, CRDs, and operator packages. Type object Required metadata spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Required sourceType Property Type Description address string Address is a host that OLM can use to connect to a pre-existing registry. Format: <registry-host or ip>:<port> Only used when SourceType = SourceTypeGrpc. Ignored when the Image field is set. configMap string ConfigMap is the name of the ConfigMap to be used to back a configmap-server registry. Only used when SourceType = SourceTypeConfigmap or SourceTypeInternal. description string displayName string Metadata grpcPodConfig object GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. icon object image string Image is an operator-registry container image to instantiate a registry-server with. Only used when SourceType = SourceTypeGrpc. If present, the address field is ignored. priority integer Priority field assigns a weight to the catalog source to prioritize them so that it can be consumed by the dependency resolver. Usage: Higher weight indicates that this catalog source is preferred over lower weighted catalog sources during dependency resolution. The range of the priority value can go from positive to negative in the range of int32. The default value to a catalog source with unassigned priority would be 0. The catalog source with the same priority values will be ranked lexicographically based on its name. publisher string runAsRoot boolean RunAsRoot allows admins to indicate that they wish to run the CatalogSource pod in a privileged pod as root. This should only be enabled when running older catalog images which could not be run as non-root. secrets array (string) Secrets represent set of secrets that can be used to access the contents of the catalog. It is best to keep this list small, since each will need to be tried for every catalog entry. sourceType string SourceType is the type of source updateStrategy object UpdateStrategy defines how updated catalog source images can be discovered Consists of an interval that defines polling duration and an embedded strategy type 2.1.2. .spec.grpcPodConfig Description GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. Type object Property Type Description affinity object Affinity is the catalog source's pod's affinity. 
extractContent object ExtractContent configures the gRPC catalog Pod to extract catalog metadata from the provided index image and use a well-known version of the opm server to expose it. The catalog index image that this CatalogSource is configured to use must be using the file-based catalogs in order to utilize this feature. memoryTarget integer-or-string MemoryTarget configures the USDGOMEMLIMIT value for the gRPC catalog Pod. This is a soft memory limit for the server, which the runtime will attempt to meet but makes no guarantees that it will do so. If this value is set, the Pod will have the following modifications made to the container running the server: - the USDGOMEMLIMIT environment variable will be set to this value in bytes - the memory request will be set to this value This field should be set if it's desired to reduce the footprint of a catalog server as much as possible, or if a catalog being served is very large and needs more than the default allocation. If your index image has a file- system cache, determine a good approximation for this value by doubling the size of the package cache at /tmp/cache/cache/packages.json in the index image. This field is best-effort; if unset, no default will be used and no Pod memory limit or USDGOMEMLIMIT value will be set. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. priorityClassName string If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default. securityContextConfig string SecurityContextConfig can be one of legacy or restricted . The CatalogSource's pod is either injected with the right pod.spec.securityContext and pod.spec.container[*].securityContext values to allow the pod to run in Pod Security Admission (PSA) restricted mode, or doesn't set these values at all, in which case the pod can only be run in PSA baseline or privileged namespaces. If the SecurityContextConfig is unspecified, the mode will be determined by the namespace's PSA configuration. If the namespace is enforcing restricted mode, then the pod will be configured as if restricted was specified. Otherwise, it will be configured as if legacy was specified. Specifying a value other than legacy or restricted result in a validation error. When using older catalog images, which can not run in restricted mode, the SecurityContextConfig should be set to legacy . More information about PSA can be found here: https://kubernetes.io/docs/concepts/security/pod-security-admission/' tolerations array Tolerations are the catalog source's pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 2.1.3. .spec.grpcPodConfig.affinity Description Affinity is the catalog source's pod's affinity. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 2.1.4. .spec.grpcPodConfig.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. 
Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 2.1.5. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 2.1.6. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 2.1.7. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.8. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.9. 
.spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.10. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 2.1.11. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.12. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 2.1.13. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 2.1.14. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. 
matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.15. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.16. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.17. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 2.1.18. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.19. .spec.grpcPodConfig.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.20. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.21. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.22. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. 
mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.23. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.24. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.25. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.26. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.27. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.28. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.29. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.30. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. 
matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.31. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.32. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.33. 
.spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.34. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.35. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.36. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.37. .spec.grpcPodConfig.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.38. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.39. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.40. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. 
Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.41. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.42. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.43. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.44. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.45. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.46. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.47. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.48. 
.spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.49. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.50. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.51. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.52. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.53. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.54. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.55. 
.spec.grpcPodConfig.extractContent Description ExtractContent configures the gRPC catalog Pod to extract catalog metadata from the provided index image and use a well-known version of the opm server to expose it. The catalog index image that this CatalogSource is configured to use must be using the file-based catalogs in order to utilize this feature. Type object Required cacheDir catalogDir Property Type Description cacheDir string CacheDir is the directory storing the pre-calculated API cache. catalogDir string CatalogDir is the directory storing the file-based catalog contents. 2.1.56. .spec.grpcPodConfig.tolerations Description Tolerations are the catalog source's pod's tolerations. Type array 2.1.57. .spec.grpcPodConfig.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.58. .spec.icon Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 2.1.59. .spec.updateStrategy Description UpdateStrategy defines how updated catalog source images can be discovered Consists of an interval that defines polling duration and an embedded strategy type Type object Property Type Description registryPoll object 2.1.60. .spec.updateStrategy.registryPoll Description Type object Property Type Description interval string Interval is used to determine the time interval between checks of the latest catalog source version. The catalog operator polls to see if a new version of the catalog source is available. If available, the latest image is pulled and gRPC traffic is directed to the latest catalog source. 2.1.61. .status Description Type object Property Type Description conditions array Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } configMapReference object ConfigMapReference (deprecated) is the reference to the ConfigMap containing the catalog source's configuration, when the catalog source is a ConfigMap connectionState object ConnectionState represents the current state of the CatalogSource's connection to the registry latestImageRegistryPoll string The last time the CatalogSource image registry has been polled to ensure the image is up-to-date message string A human readable message indicating details about why the CatalogSource is in this condition. reason string Reason is the reason the CatalogSource was transitioned to its current state. registryService object RegistryService represents the current state of the GRPC service used to serve the catalog 2.1.62. .status.conditions Description Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. Type array 2.1.63. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.1.64. 
.status.configMapReference Description ConfigMapReference (deprecated) is the reference to the ConfigMap containing the catalog source's configuration, when the catalog source is a ConfigMap Type object Required name namespace Property Type Description lastUpdateTime string name string namespace string resourceVersion string uid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 2.1.65. .status.connectionState Description ConnectionState represents the current state of the CatalogSource's connection to the registry Type object Required lastObservedState Property Type Description address string lastConnect string lastObservedState string 2.1.66. .status.registryService Description RegistryService represents the current state of the GRPC service used to serve the catalog Type object Property Type Description createdAt string port string protocol string serviceName string serviceNamespace string 2.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/catalogsources GET : list objects of kind CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources DELETE : delete collection of CatalogSource GET : list objects of kind CatalogSource POST : create a CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} DELETE : delete a CatalogSource GET : read the specified CatalogSource PATCH : partially update the specified CatalogSource PUT : replace the specified CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status GET : read status of the specified CatalogSource PATCH : partially update status of the specified CatalogSource PUT : replace status of the specified CatalogSource 2.2.1. /apis/operators.coreos.com/v1alpha1/catalogsources HTTP method GET Description list objects of kind CatalogSource Table 2.1. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty 2.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources HTTP method DELETE Description delete collection of CatalogSource Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CatalogSource Table 2.3. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty HTTP method POST Description create a CatalogSource Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body CatalogSource schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 202 - Accepted CatalogSource schema 401 - Unauthorized Empty 2.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the CatalogSource HTTP method DELETE Description delete a CatalogSource Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CatalogSource Table 2.10. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CatalogSource Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CatalogSource Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body CatalogSource schema Table 2.15. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty 2.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status Table 2.16. Global path parameters Parameter Type Description name string name of the CatalogSource HTTP method GET Description read status of the specified CatalogSource Table 2.17. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CatalogSource Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CatalogSource Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.21. Body parameters Parameter Type Description body CatalogSource schema Table 2.22. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty
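As an orientation aid, the following is a minimal sketch of a CatalogSource manifest that uses some of the spec fields documented above; the catalog name, namespace, and index image are placeholder values, not values taken from this reference.

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog                 # placeholder name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/example/catalog-index:latest  # placeholder file-based catalog index image
  grpcPodConfig:
    tolerations:                        # .spec.grpcPodConfig.tolerations, as described in this reference
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
  updateStrategy:
    registryPoll:
      interval: 30m                     # .spec.updateStrategy.registryPoll.interval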
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operatorhub_apis/catalogsource-operators-coreos-com-v1alpha1
Chapter 3. Listing available buckets in your object store
Chapter 3. Listing available buckets in your object store To list the buckets that are available in your object store, use the list_buckets() method. Prerequisites You have cloned the odh-doc-examples repository to your workbench. You have opened the s3client_examples.ipynb file in your workbench. You have installed Boto3 and configured the S3 client. Procedure In the notebook, locate the instructions that list the available buckets, and then run the code cell. A successful response includes an HTTP status code of 200 and a list of buckets, similar to the following output: Locate the instructions that print only the names of the available buckets, and then run the code cell. The output displays the bucket names, similar to the following example.
[ "#List available buckets s3_client.list_buckets()", "'HTTPStatusCode': 200, 'Buckets': [{'Name': 'aqs086-image-registry', 'CreationDate': datetime.datetime(2024, 1, 16, 20, 21, 36, 244000, tzinfo=tzlocal( ))},", "#Print only names of available buckets for bucket in s3_client.list_buckets()['Buckets']: print(bucket['Name'])", "aqs086-image-registry aqs087-image-registry aqs135-image-registry aqs246-image-registry" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/listing-available-amazon-buckets_s3
Part IV. Advanced Installation Options
Part IV. Advanced Installation Options This part of the Red Hat Enterprise Linux Installation Guide covers more advanced or uncommon methods of installing Red Hat Enterprise Linux, including: customizing the installation program's behavior by specifying boot options setting up a PXE server to boot the installation program over a network installing with remote access through VNC using a Kickstart file to automate the installation process installing into a disk image instead of a physical drive upgrading a release of Red Hat Enterprise Linux to the current version
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/part-advanced-installation-options
Chapter 13. Atom Component
Chapter 13. Atom Component Available as of Camel version 1.2 The atom: component is used for polling Atom feeds. Camel will poll the feed every 60 seconds by default. Note: The component currently only supports polling (consuming) feeds. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atom</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 13.1. URI format atom://atomUri[?options] Where atomUri is the URI to the Atom feed to poll. 13.2. Options The Atom component has no options. The Atom endpoint is configured using URI syntax: with the following path and query parameters: 13.2.1. Path Parameters (1 parameters): Name Description Default Type feedUri Required The URI to the feed to poll. String 13.2.2. Query Parameters (27 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean feedHeader (consumer) Sets whether to add the feed object as a header true boolean filter (consumer) Sets whether to use filtering or not of the entries. true boolean lastUpdate (consumer) Sets the timestamp to be used for filtering entries from the atom feeds. This options is only in conjunction with the splitEntries. Date password (consumer) Sets the password to be used for basic authentication when polling from a HTTP feed String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean sortEntries (consumer) Sets whether to sort entries by published date. Only works when splitEntries = true. false boolean splitEntries (consumer) Sets whether or not entries should be sent individually or whether the entire feed should be sent as a single message true boolean throttleEntries (consumer) Sets whether all entries identified in a single feed poll should be delivered immediately. If true, only one entry is processed per consumer.delay. Only applicable when splitEntries = true. true boolean username (consumer) Sets the username to be used for basic authentication when polling from a HTTP feed String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). 
false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 13.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.atom.enabled Enable atom component true Boolean camel.component.atom.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean You can append query options to the URI in the following format, ?option=value&option=value&... 13.4. Exchange data format Camel will set the In body on the returned Exchange with the entries. Depending on the splitEntries flag Camel will either return one Entry or a List<Entry> . Option Value Behavior splitEntries true Only a single entry from the currently being processed feed is set: exchange.in.body(Entry) splitEntries false The entire list of entries from the feed is set: exchange.in.body(List<Entry>) Camel can set the Feed object on the In header (see feedHeader option to disable this): 13.5. Message Headers Camel atom uses these headers. Header Description CamelAtomFeed When consuming the org.apache.abdera.model.Feed object is set to this header. 13.6. Samples In this sample we poll James Strachan's blog. 
from("atom://http://macstrac.blogspot.com/feeds/posts/default").to("seda:feeds"); In this sample we want to filter only good blogs we like to a SEDA queue. The sample also shows how to setup Camel standalone, not running in any Container or using Spring. 13.7. See Also Configuring Camel Component Endpoint Getting Started RSS
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atom</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "atom://atomUri[?options]", "atom:feedUri", "from(\"atom://http://macstrac.blogspot.com/feeds/posts/default\").to(\"seda:feeds\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/atom-component
Chapter 17. Supported kdump configurations and targets
Chapter 17. Supported kdump configurations and targets The kdump mechanism is a feature of the Linux kernel that generates a crash dump file when a kernel crash occurs. The kernel dump file has critical information that helps to analyze and determine the root cause of a kernel crash. The crash can be because of various factors, hardware issues or third-party kernel modules problems, to name a few. By using the provided information and procedures, you can perform the following actions: Identify the supported configurations and targets for your RHEL 9 systems. Configure kdump. Verify kdump operation. 17.1. Memory requirements for kdump For kdump to capture a kernel crash dump and save it for further analysis, a part of the system memory should be permanently reserved for the capture kernel. When reserved, this part of the system memory is not available to the main kernel. The memory requirements vary based on certain system parameters. One of the major factors is the system's hardware architecture. To identify the exact machine architecture, such as Intel 64 and AMD64, also known as x86_64, and print it to standard output, use the following command: With the stated list of minimum memory requirements, you can set the appropriate memory size to automatically reserve a memory for kdump on the latest available versions. The memory size depends on the system's architecture and total available physical memory. Table 17.1. Minimum amount of reserved memory required for kdump Architecture Available Memory Minimum Reserved Memory AMD64 and Intel 64 ( x86_64 ) 1 GB to 4 GB 192 MB of RAM 4 GB to 64 GB 256 MB of RAM 64 GB and more 512 MB of RAM 64-bit ARM (4k pages) 1 GB to 4 GB 256 MB of RAM 4 GB to 64 GB 320 MB of RAM 64 GB and more 576 MB of RAM 64-bit ARM (64k pages) 1 GB to 4 GB 356 MB of RAM 4 GB to 64 GB 420 MB of RAM 64 GB and more 676 MB of RAM IBM Power Systems ( ppc64le ) 2 GB to 4 GB 384 MB of RAM 4 GB to 16 GB 512 MB of RAM 16 GB to 64 GB 1 GB of RAM 64 GB to 128 GB 2 GB of RAM 128 GB and more 4 GB of RAM IBM Z ( s390x ) 1 GB to 4 GB 192 MB of RAM 4 GB to 64 GB 256 MB of RAM 64 GB and more 512 MB of RAM On many systems, kdump is able to estimate the amount of required memory and reserve it automatically. This behavior is enabled by default, but only works on systems that have more than a certain amount of total available memory, which varies based on the system architecture. Important The automatic configuration of reserved memory based on the total amount of memory in the system is a best effort estimation. The actual required memory might vary due to other factors such as I/O devices. Using not enough of memory might cause debug kernel unable to boot as a capture kernel in the case of kernel panic. To avoid this problem, increase the crash kernel memory sufficiently. Additional resources Red Hat Enterprise Linux Technology Capabilities and Limits 17.2. Minimum threshold for automatic memory reservation By default, the kexec-tools utility configures the crashkernel command line parameter and reserves a certain amount of memory for kdump . On some systems however, it is still possible to assign memory for kdump either by using the crashkernel=auto parameter in the boot loader configuration file, or by enabling this option in the graphical configuration utility. For this automatic reservation to work, a certain amount of total memory needs to be available in the system. The memory requirement varies based on the system's architecture. 
If the system memory is less than the specified threshold value, you must configure the memory manually. Table 17.2. Minimum amount of memory required for automatic memory reservation Architecture Required Memory AMD64 and Intel 64 ( x86_64 ) 1 GB IBM Power Systems ( ppc64le ) 2 GB IBM Z ( s390x ) 1 GB 64-bit ARM 1 GB Note The crashkernel=auto option in the boot command line is no longer supported on RHEL 9 and later releases. 17.3. Supported kdump targets When a kernel crash occurs, the operating system saves the dump file on the configured or default target location. You can save the dump file either directly to a device, store as a file on a local file system, or send the dump file over a network. With the following list of dump targets, you can know the targets that are currently supported or not supported by kdump . Table 17.3. kdump targets on RHEL 9 Target type Supported Targets Unsupported Targets Physical Storage Logical Volume Manager (LVM). Thin provisioning volume. Fibre Channel (FC) disks such as qla2xxx , lpfc , bnx2fc , and bfa . An iSCSI software-configured logical device on a networked storage server. The mdraid subsystem as a software RAID solution. Hardware RAID such as smartpqi , hpsa , megaraid , mpt3sas , aacraid , and mpi3mr . SCSI and SATA disks. iSCSI and HBA offloads. Hardware FCoE such as qla2xxx and lpfc . Software FCoE such as bnx2fc . For software FCoE to function, additional memory configuration might be required. BIOS RAID. Software iSCSI with iBFT . Currently supported transports are bnx2i , cxgb3i , and cxgb4i . Software iSCSI with hybrid device driver such as be2iscsi . Fibre Channel over Ethernet ( FCoE ). Legacy IDE. GlusterFS servers. GFS2 file system. Clustered Logical Volume Manager (CLVM). High availability LVM volumes (HA-LVM). Network Hardware using kernel modules such as igb , ixgbe , ice , i40e , e1000e , igc , tg3 , bnx2x , bnxt_en , qede , cxgb4 , be2net , enic , sfc , mlx4_en , mlx5_core , r8169 , atlantic , nfp , and nicvf on 64-bit ARM architecture only. Hardware using kernel modules such as sfc SRIOV , cxgb4vf , and pch_gbe . IPv6 protocol. Wireless connections. InfiniBand networks. VLAN network over bridge and team. Hypervisor Kernel-based virtual machines (KVM). Xen hypervisor in certain configurations only. ESXi 6.6, 6.7, 7.0. Hyper-V 2012 R2 on RHEL Gen1 UP Guest only and later version. Filesystem The ext[234]fs , XFS , virtiofs , and NFS file systems. The Btrfs file system. Firmware BIOS-based systems. UEFI Secure Boot. Additional resources Configuring the kdump target 17.4. Supported kdump filtering levels To reduce the size of the dump file, kdump uses the makedumpfile core collector to compress the data and also exclude unwanted information, for example, you can remove hugepages and hugetlbfs pages by using the -8 level. The levels that makedumpfile currently supports can be seen in the table for Filtering levels for `kdump` . Table 17.4. Filtering levels for kdump Option Description 1 Zero pages 2 Cache pages 4 Cache private 8 User pages 16 Free pages Additional resources Configuring the kdump core collector 17.5. Supported default failure responses By default, when kdump fails to create a core dump, the operating system reboots. However, you can configure kdump to perform a different operation in case it fails to save the core dump to the primary target. dump_to_rootfs Attempt to save the core dump to the root file system. 
This option is especially useful in combination with a network target: if the network target is unreachable, this option configures kdump to save the core dump locally. The system is rebooted afterwards. reboot Reboot the system, losing the core dump in the process. halt Halt the system, losing the core dump in the process. poweroff Power off the system, losing the core dump in the process. shell Run a shell session from within the initramfs, allowing the user to record the core dump manually. final_action Enable additional operations such as reboot , halt , and poweroff actions after a successful kdump or when shell or dump_to_rootfs failure action completes. The default final_action option is reboot . failure_action Specifies the action to perform when a dump fails in the event of a kernel crash. The default failure_action option is reboot . Additional resources Configuring the kdump default failure responses 17.6. Using the final_action parameter When kdump succeeds, or if kdump fails to save the vmcore file at the configured target, you can perform additional operations such as reboot , halt , and poweroff by using the final_action parameter. If the final_action parameter is not specified, reboot is the default response. Procedure To configure final_action , edit the /etc/kdump.conf file and add one of the following options: final_action reboot final_action halt final_action poweroff Restart the kdump service for the changes to take effect. 17.7. Using the failure_action parameter The failure_action parameter specifies the action to perform when a dump fails in the event of a kernel crash. The default action for failure_action is reboot , which reboots the system. The parameter recognizes the following actions: reboot Reboots the system after a dump failure. dump_to_rootfs Saves the dump file on the root file system when a non-root dump target is configured. halt Halts the system. poweroff Powers off the system. shell Starts a shell session inside initramfs , from which you can manually perform additional recovery actions. Procedure To configure the action to take if the dump fails, edit the /etc/kdump.conf file and specify one of the failure_action options: failure_action reboot failure_action halt failure_action poweroff failure_action shell failure_action dump_to_rootfs Restart the kdump service for the changes to take effect.
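To make these directives concrete, a sketch of the relevant /etc/kdump.conf lines is shown below; the NFS target is a placeholder used only to illustrate why dump_to_rootfs is a useful fallback for network targets. Restart the kdump service afterwards, for example with kdumpctl restart.

# /etc/kdump.conf (excerpt)
nfs nfs.example.com:/export/crash    # placeholder network dump target
failure_action dump_to_rootfs        # if the NFS target is unreachable, save the core dump locally
final_action reboot                  # reboot after a successful dump (the default)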
[ "uname -m", "kdumpctl restart", "kdumpctl restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/supported-kdump-configurations-and-targets_managing-monitoring-and-updating-the-kernel
21.5. Monitoring the Local Disk for Graceful Shutdown
21.5. Monitoring the Local Disk for Graceful Shutdown See the Monitoring the Local Disk for Graceful Shutdown section in the Red Hat Directory Server Performance Tuning Guide .
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/diskmonitoring
Chapter 31. Set Up Cross-Datacenter Replication
Chapter 31. Set Up Cross-Datacenter Replication In Red Hat JBoss Data Grid, Cross-Datacenter Replication allows the administrator to create data backups in multiple clusters. These clusters can be at the same physical location or different ones. JBoss Data Grid's Cross-Site Replication implementation is based on JGroups' RELAY2 protocol. Cross-Datacenter Replication ensures data redundancy across clusters. Ideally, each of these clusters should be in a different physical location from the others. 31.1. Cross-Datacenter Replication Operations Red Hat JBoss Data Grid's Cross-Datacenter Replication operation is explained through the use of an example, as follows: Example 31.1. Cross-Datacenter Replication Example Figure 31.1. Cross-Datacenter Replication Example Three sites are configured in this example: LON , NYC and SFO . Each site hosts a running JBoss Data Grid cluster made up of three to four physical nodes. The Users cache is active in all three sites - LON , NYC and SFO . Changes to the Users cache at any one of these sites will be replicated to the other two as long as the cache defines the other two sites as its backups through configuration. The Orders cache, however, is only available locally at the LON site because it is not replicated to the other sites. The Users cache can use a different replication mechanism for each site. For example, it can back up data synchronously to SFO and asynchronously to NYC and LON. The Users cache can also have a different configuration from one site to another. For example, it can be configured as a distributed cache with numOwners set to 2 in the LON site, as a replicated cache in the NYC site and as a distributed cache with numOwners set to 1 in the SFO site. JGroups is used for communication within each site as well as inter-site communication. Specifically, a JGroups protocol called RELAY2 facilitates communication between sites. For more information, see Section F.4, "About RELAY2"
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-set_up_cross-datacenter_replication
function::sigset_mask_str
function::sigset_mask_str Name function::sigset_mask_str - Returns the string representation of a sigset Synopsis Arguments mask the sigset to convert to string.
[ "sigset_mask_str:string(mask:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sigset-mask-str
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can improve it. To provide feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/monitoring_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf
Appendix A. Using your subscription
Appendix A. Using your subscription Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired Streams for Apache Kafka product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all of its dependencies, use: dnf install <package_name> To install a previously downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2025-03-05 17:09:37 UTC
[ "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_bridge/using_your_subscription
Chapter 7. Viewing the status of the QuayRegistry object
Chapter 7. Viewing the status of the QuayRegistry object Lifecycle observability for a given Red Hat Quay deployment is reported in the status section of the corresponding QuayRegistry object. The Red Hat Quay Operator constantly updates this section, and this should be the first place to look for any problems or state changes in Red Hat Quay or its managed dependencies. 7.1. Viewing the registry endpoint Once Red Hat Quay is ready to be used, the status.registryEndpoint field will be populated with the publicly available hostname of the registry. 7.2. Viewing the version of Red Hat Quay in use The current version of Red Hat Quay that is running will be reported in status.currentVersion . 7.3. Viewing the conditions of your Red Hat Quay deployment Certain conditions will be reported in status.conditions .
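For example, you can read these status fields with the oc client; the registry name and namespace below are placeholders for your own deployment.

# Publicly available registry hostname
oc get quayregistry example-registry -n example-namespace -o jsonpath='{.status.registryEndpoint}'

# Version of Red Hat Quay that is currently running
oc get quayregistry example-registry -n example-namespace -o jsonpath='{.status.currentVersion}'

# Reported conditions
oc get quayregistry example-registry -n example-namespace -o jsonpath='{.status.conditions}'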
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-quayregistry-status
Chapter 9. What to do next?
Chapter 9. What to do next? This is only the beginning of what Red Hat Ceph Storage can do to help you meet the challenging storage demands of the modern data center. Here are links to more information on a variety of topics: Benchmarking performance and accessing performance counters, see the Benchmarking Performance chapter in the Administration Guide for Red Hat Ceph Storage 4. Creating and managing snapshots, see the Snapshots chapter in the Block Device Guide for Red Hat Ceph Storage 4. Expanding the Red Hat Ceph Storage cluster, see the Managing the storage cluster size chapter in the Operations Guide for Red Hat Ceph Storage 4. Mirroring Ceph Block Devices, see the Block Device Mirroring chapter in the Block Device Guide for Red Hat Ceph Storage 4. Process management, see the Process Management chapter in the Administration Guide for Red Hat Ceph Storage 4. Tunable parameters, see the Configuration Guide for Red Hat Ceph Storage 4. Using Ceph as the back-end storage for OpenStack, see the Back-ends section in the Storage Guide for Red Hat OpenStack Platform. Monitoring the health and capacity of the Red Hat Ceph Storage cluster with the Ceph Dashboard, see the Dashboard Guide for additional details.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/what_to_do_next
Chapter 3. Deploy standalone Multicloud Object Gateway
Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, the result can be a total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. 
Navigate to Storage and verify that the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. 
In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
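If you prefer the command line for this check, a quick sketch using the default openshift-storage namespace:

oc get pods -n openshift-storage
# narrow the output to the Multicloud Object Gateway pods listed above
oc get pods -n openshift-storage | grep noobaa

All listed pods should report a Running status.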
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_google_cloud/deploy-standalone-multicloud-object-gateway
Monitoring OpenShift Data Foundation
Monitoring OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.14 View cluster health, metrics, or set alerts. Red Hat Storage Documentation Team Abstract Read this document for instructions on monitoring Red Hat OpenShift Data Foundation using the Block and File, and Object dashboards. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Cluster health 1.1. Verifying OpenShift Data Foundation is healthy Storage health is visible on the Block and File and Object dashboards. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Check if the Status card has a green tick in the Block and File and the Object tabs. Green tick indicates that the cluster is healthy. See Section 1.2, "Storage health levels and cluster state" for information about the different health states and the alerts that appear. 1.2. Storage health levels and cluster state Status information and alerts related to OpenShift Data Foundation are displayed in the storage dashboards. 1.2.1. Block and File dashboard indicators The Block and File dashboard shows the complete state of OpenShift Data Foundation and the state of persistent volumes. The states that are possible for each resource type are listed in the following table. Table 1.1. OpenShift Data Foundation health levels State Icon Description UNKNOWN OpenShift Data Foundation is not deployed or unavailable. Green Tick Cluster health is good. Warning OpenShift Data Foundation cluster is in a warning state. In internal mode, an alert is displayed along with the issue details. Alerts are not displayed for external mode. Error OpenShift Data Foundation cluster has encountered an error and some component is nonfunctional. In internal mode, an alert is displayed along with the issue details. Alerts are not displayed for external mode. 1.2.2. Object dashboard indicators The Object dashboard shows the state of the Multicloud Object Gateway and any object claims in the cluster. The states that are possible for each resource type are listed in the following table. Table 1.2. Object Service health levels State Description Green Tick Object storage is healthy. Multicloud Object Gateway is not running Shown when NooBaa system is not found. All resources are unhealthy Shown when all NooBaa pools are unhealthy. Many buckets have issues Shown when >= 50% of buckets encounter error(s). Some buckets have issues Shown when >= 30% of buckets encounter error(s). Unavailable Shown when network issues and/or errors exist. 1.2.3. Alert panel The Alert panel appears below the Status card in both the Block and File dashboard and the Object dashboard when the cluster state is not healthy. 
Information about specific alerts and how to respond to them is available in Troubleshooting OpenShift Data Foundation . Chapter 2. Multicluster storage health To view the overall storage health status across all the clusters with OpenShift Data Foundation and manage its capacity, you must first enable the multicluster dashboard on the Hub cluster. 2.1. Enabling multicluster dashboard on Hub cluster You can enable the multicluster dashboard on the install screen either before or after installing ODF Multicluster Orchestrator with the console plugin. Prerequisites Ensure that you have installed OpenShift Container Platform version 4.14 and have administrator privileges. Ensure that you have installed Multicluster Orchestrator 4.14 operator with plugin for console enabled. Ensure that you have installed Red Hat Advanced Cluster Management for Kubernetes (RHACM) 2.9 from Operator Hub. For instructions on how to install, see Installing RHACM . Ensure you have enabled observability on RHACM. See Enabling observability guidelines . Procedure Create the configmap file named observability-metrics-custom-allowlist.yaml and add the name of the custom metric to the metrics_list.yaml parameter. You can use the following YAML to list the OpenShift Data Foundation metrics on Hub cluster. For details, see Adding custom metrics . Run the following command in the open-cluster-management-observability namespace: After observability-metrics-custom-allowlist yaml is created, RHACM will start collecting the listed OpenShift Data Foundation metrics from all the managed clusters. If you want to exclude specific managed clusters from collecting the observability data, add the following cluster label to your clusters: observability: disabled . To view the multicluster health, see chapter verifying multicluster storage dashboard . 2.2. Verifying multicluster storage health on hub cluster Prerequisites Ensure that you have enabled multicluster monitoring. For instructions, see chapter Enabling multicluster dashboard . Procedure In the OpenShift web console of Hub cluster, ensure All Clusters is selected. Navigate to Data Services and click Storage System . On the Overview tab, verify that there are green ticks in front of OpenShift Data Foundation and Systems . This indicates that the operator is running and all storage systems are available. In the Status card, Click OpenShift Data Foundation to view the operator status. Click Systems to view the storage system status. The Storage system capacity card shows the following details: Name of the storage system Cluster name Graphical representation of total and used capacity in percentage Actual values for total and used capacity in TiB Chapter 3. Metrics 3.1. Metrics in the Block and File dashboard You can navigate to the Block and File dashboard in the OpenShift Web Console as follows: Click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Block and File tab. The following cards on the Block and File dashboard provide the metrics based on deployment mode (internal or external): Details card The Details card shows the following: Service name Cluster name The name of the Provider on which the system runs, for example, AWS , VSphere , None for Bare metal. 
Mode (deployment mode as either Internal or External) OpenShift Data Foundation operator version In-transit encryption (shows whether the encryption is enabled or disabled) Storage Efficiency card This card shows the compression ratio that represents a compressible data effectiveness metric, which includes all the compression-enabled pools. This card also shows the savings metric that represents the actual disk capacity saved, which includes all the compression-enabled pools and associated replicas. Inventory card The Inventory card shows the total number of active nodes, disks, pools, storage classes, PVCs, and deployments backed by the OpenShift Data Foundation provisioner. Note For external mode, the number of nodes will be 0 by default as there are no dedicated nodes for OpenShift Data Foundation. Status card This card shows whether the cluster is up and running without any errors or is experiencing some issues. For internal mode, Data Resiliency indicates the status of data re-balancing in Ceph across the replicas. When the internal mode cluster is in a warning or error state, the Alerts section is shown along with the relevant alerts. For external mode, Data Resiliency and alerts are not displayed. Raw Capacity card This card shows the total raw storage capacity, which includes replication on the cluster. Used legend indicates the used raw storage capacity on the cluster Available legend indicates the available raw storage capacity on the cluster Note This card is not applicable for external mode clusters. Requested Capacity This card shows the actual amount of non-replicated data stored in the cluster and its distribution. You can choose between Projects, Storage Classes, Pods, and Persistent Volume Claims from the drop-down list on the top of the card. You need to select a namespace for the Persistent Volume Claims option. These options are for filtering the data shown in the graph. The graph displays the requested capacity for only the top five entities based on usage. The aggregate requested capacity of the remaining entities is displayed as Other. Option Display Projects The aggregated capacity of each project that is using OpenShift Data Foundation and how much is being used. Storage Classes The aggregate capacity that is based on the OpenShift Data Foundation based storage classes. Pods All the pods that are trying to use the PVCs that are backed by the OpenShift Data Foundation provisioner. PVCs All the PVCs in the namespace that you selected from the dropdown list and that are mounted on to an active pod. PVCs that are not attached to pods are not included. For external mode, see the Capacity breakdown card . Capacity breakdown card This card is only applicable for external mode clusters. In this card, you can view a graphic breakdown of capacity per project, storage classes, and pods. You can choose between Projects, Storage Classes, and Pods from the drop-down menu on the top of the card. These options are for filtering the data shown in the graph. The graph displays the used capacity for only the top five entities based on usage. The aggregate usage of the remaining entities is displayed as Other . Utilization card The card shows used capacity, input/output operations per second, latency, throughput, and recovery information for the internal mode cluster. For external mode, this card shows only the used and requested capacity details for that cluster. Activity card This card shows the current and the past activities of the OpenShift Data Foundation cluster. 
The card is separated into two sections: Ongoing : Displays the progress of ongoing activities related to rebuilding of data resiliency and upgrading of OpenShift Data Foundation operator. Recent Events : Displays the list of events that happened in the openshift-storage namespace. 3.2. Metrics in the Object dashboard You can navigate to the Object dashboard in the OpenShift Web Console as follows: Click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Object tab. The following metrics are available in the Object dashboard: Details card This card shows the following information: Service Name : The Multicloud Object Gateway (MCG) service name. System Name : The Multicloud Object Gateway and RADOS Object Gateway system names. The Multicloud Object Gateway system name is also a hyperlink to the MCG management user interface. Provider : The name of the provider on which the system runs (example: AWS , VSphere , None for Baremetal) Version : OpenShift Data Foundation operator version. Storage Efficiency card In this card you can view how the MCG optimizes the consumption of the storage backend resources through deduplication and compression and provides you with a calculated efficiency ratio (application data vs logical data) and an estimated savings figure (how many bytes the MCG did not send to the storage provider) based on capacity of bare metal and cloud based storage and egress of cloud based storage. Buckets card Buckets are containers maintained by the MCG and RADOS Object Gateway to store data on behalf of the applications. These buckets are created and accessed through object bucket claims (OBCs). A specific policy can be applied to a bucket to customize data placement, data spill-over, data resiliency, capacity quotas, and so on. In this card, information about object buckets (OB) and object bucket claims (OBCs) is shown separately. OB includes all the buckets that are created using S3 or the user interface (UI) and OBC includes all the buckets created using YAMLs or the command line interface (CLI). The number displayed on the left of the bucket type is the total count of OBs or OBCs. The number displayed on the right shows the error count and is visible only when the error count is greater than zero. You can click on the number to see the list of buckets that has the warning or error status. Resource Providers card This card displays a list of all Multicloud Object Gateway and RADOS Object Gateway resources that are currently in use. Those resources are used to store data according to the buckets policies and can be a cloud-based resource or a bare metal resource. Status card This card shows whether the system and its services are running without any issues. When the system is in a warning or error state, the alerts section is shown and the relevant alerts are displayed there. Click the alert links beside each alert for more information about the issue. For information about health checks, see Cluster health . If multiple object storage services are available in the cluster, click the service type (such as Object Service or Data Resiliency ) to see the state of the individual services. Data resiliency in the status card indicates if there is any resiliency issue regarding the data stored through the Multicloud Object Gateway and RADOS Object Gateway. 
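The Buckets card above counts object bucket claims; as an illustration of the kind of OBC it reports on, a minimal ObjectBucketClaim backed by the Multicloud Object Gateway might look like the following sketch (the claim name, bucket name, and namespace are placeholders, not values from this guide):

cat <<EOF | oc create -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-obc
  namespace: openshift-storage
spec:
  generateBucketName: example-bucket
  storageClassName: openshift-storage.noobaa.io
EOF

Once the claim is bound, it is included in the OBC count shown in the Buckets card.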
Capacity breakdown card In this card you can visualize how applications consume the object storage through the Multicloud Object Gateway and RADOS Object Gateway. You can use the Service Type drop-down to view the capacity breakdown for the Multicloud Gateway and Object Gateway separately. When viewing the Multicloud Object Gateway, you can use the Break By drop-down to filter the results in the graph by either Projects or Bucket Class . Performance card In this card, you can view the performance of the Multicloud Object Gateway or RADOS Object Gateway. Use the Service Type drop-down to choose which you would like to view. For Multicloud Object Gateway accounts, you can view the I/O operations and logical used capacity. For providers, you can view I/O operation, physical and logical usage, and egress. The following tables explain the different metrics that you can view based on your selection from the drop-down menus on the top of the card: Table 3.1. Indicators for Multicloud Object Gateway Consumer types Metrics Chart display Accounts I/O operations Displays read and write I/O operations for the top five consumers. The total reads and writes of all the consumers is displayed at the bottom. This information helps you monitor the throughput demand (IOPS) per application or account. Accounts Logical Used Capacity Displays total logical usage of each account for the top five consumers. This helps you monitor the throughput demand per application or account. Providers I/O operations Displays the count of I/O operations generated by the MCG when accessing the storage backend hosted by the provider. This helps you understand the traffic in the cloud so that you can improve resource allocation according to the I/O pattern, thereby optimizing the cost. Providers Physical vs Logical usage Displays the data consumption in the system by comparing the physical usage with the logical usage per provider. This helps you control the storage resources and devise a placement strategy in line with your usage characteristics and your performance requirements while potentially optimizing your costs. Providers Egress The amount of data the MCG retrieves from each provider (read bandwidth originated with the applications). This helps you understand the traffic in the cloud to improve resource allocation according to the egress pattern, thereby optimizing the cost. For the RADOS Object Gateway, you can use the Metric drop-down to view the Latency or Bandwidth . Latency : Provides a visual indication of the average GET/PUT latency imbalance across RADOS Object Gateway instances. Bandwidth : Provides a visual indication of the sum of GET/PUT bandwidth across RADOS Object Gateway instances. Activity card This card displays what activities are happening or have recently happened in the OpenShift Data Foundation cluster. The card is separated into two sections: Ongoing : Displays the progress of ongoing activities related to rebuilding of data resiliency and upgrading of OpenShift Data Foundation operator. Recent Events : Displays the list of events that happened in the openshift-storage namespace. 3.3. Pool metrics The Pool metrics dashboard provides information to ensure efficient data consumption, and how to enable or disable compression if less effective. Viewing pool metrics To view the pool list: Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click BlockPools . 
When you click on a pool name, the following cards on each Pool dashboard is displayed along with the metrics based on deployment mode (internal or external): Details card The Details card shows the following: Pool Name Volume type Replicas Status card This card shows whether the pool is up and running without any errors or is experiencing some issues. Mirroring card When the mirroring option is enabled, this card shows the mirroring status, image health, and last checked time-stamp. The mirroring metrics are displayed when cluster level mirroring is enabled. The metrics help to prevent disaster recovery failures and notify of any discrepancies so that the data is kept intact. The mirroring card shows high-level information such as: Mirroring state as either enabled or disabled for the particular pool. Status of all images under the pool as replicating successfully or not. Percentage of images that are replicating and not replicating. Inventory card The Inventory card shows the number of storage classes and Persistent Volume Claims. Compression card This card shows the compression status as enabled or disabled. It also displays the storage efficiency details as follows: Compression eligibility that indicates what portion of written compression-eligible data is compressible (per ceph parameters) Compression ratio of compression-eligible data Compression savings provides the total savings (including replicas) of compression-eligible data For information on how to enable or disable compression for an existing pool, see Updating an existing pool . Raw Capacity card This card shows the total raw storage capacity which includes replication on the cluster. Used legend indicates storage capacity used by the pool Available legend indicates the available raw storage capacity on the cluster Performance card In this card, you can view the usage of I/O operations and throughput demand per application or account. The graph indicates the average latency or bandwidth across the instances. 3.4. Network File System metrics The Network File System (NFS) metrics dashboard provides enhanced observability for NFS mounts such as the following: Mount point for any exported NFS shares Number of client mounts A breakdown statistics of the clients that are connected to help determine internal versus the external client mounts Grace period status of the Ganesha server Health statuses of the Ganesha server Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. Ensure that NFS is enabled. Procedure You can navigate to the Network file system dashboard in the OpenShift Web Console as follows: Click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Network file system tab. This tab is available only when NFS is enabled. Note When you enable or disable NFS from the command-line interface, you must perform a hard refresh to display or hide the Network file system tab in the dashboard. The following NFS metrics are displayed: Status Card This card shows the status of the server based on the total number of active worker threads. Non-zero threads specify healthy status. Throughput Card This card shows the throughput of the server, which is the summation of the total request bytes and total response bytes for both read and write operations of the server. 
Top client Card This card shows the throughput of clients, which is the summation of the total of the response bytes sent by a client and the total request bytes by a client for both read and write operations. It shows the top three of such clients. 3.5. Enabling metadata on RBD and CephFS volumes You can set the persistent volume claim (PVC), persistent volume (PV), and Namespace names in the RADOS block device (RBD) and CephFS volumes for monitoring purposes. This enables you to read the RBD and CephFS metadata to identify the mapping between the OpenShift Container Platform and RBD and CephFS volumes. To enable RADOS block device (RBD) and CephFS volume metadata feature, you need to set the CSI_ENABLE_METADATA variable in the rook-ceph-operator-config configmap . By default, this feature is disabled. If you enable the feature after upgrading from a version, the existing PVCs will not contain the metadata. Also, when you enable the metadata feature, the PVCs that were created before enabling will not have the metadata. Prerequisites Ensure to install ocs_operator and create a storagecluster for the operator. Ensure that the storagecluster is in Ready state. Procedure Edit the rook-ceph operator ConfigMap to mark CSI_ENABLE_METADATA to true . Wait for the respective CSI CephFS plugin provisioner pods and CSI RBD plugin pods to reach the Running state. Note Ensure that the setmetadata variable is automatically set after the metadata feature is enabled. This variable should not be available when the metadata feature is disabled. Verification steps To verify the metadata for RBD PVC: Create a PVC. Check the status of the PVC. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. There are four metadata on this image: To verify the metadata for RBD clones: Create a clone. Check the status of the clone. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for RBD Snapshots: Create a snapshot. Check the status of the snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. Verify the metadata for RBD Restore: Restore a volume snapshot. Check the status of the restored volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for CephFS PVC: Create a PVC. Check the status of the PVC. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for CephFS clone: Create a clone. Check the status of the clone. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). 
For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for CephFS volume snapshot: Create a volume snapshot. Check the status of the volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata of the CephFS Restore: Restore a volume snapshot. Check the status of the restored volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. Chapter 4. Alerts 4.1. Setting up alerts For internal Mode clusters, various alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File , and the Object dashboards. These alerts are not available for external Mode. Note It might take a few minutes for alerts to be shown in the alert panel because only firing alerts are visible in this panel. You can also view alerts with additional details and customize the display of Alerts in the OpenShift Container Platform. For more information, see Managing alerts . Chapter 5. Remote health monitoring OpenShift Data Foundation collects anonymized aggregated information about the health, usage, and size of clusters and reports it to Red Hat via an integrated component called Telemetry. This information allows Red Hat to improve OpenShift Data Foundation and to react to issues that impact customers more quickly. A cluster that reports data to Red Hat via Telemetry is considered a connected cluster . 5.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. These metrics are sent continuously and describe: The size of an OpenShift Data Foundation cluster The health and status of OpenShift Data Foundation components The health and status of any upgrade being performed Limited usage information about OpenShift Data Foundation components and features Summary info about alerts reported by the cluster monitoring component This continuous stream of data is used by Red Hat to monitor the health of clusters in real time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Data Foundation upgrades to customers so as to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and engineering teams with the same restrictions as accessing data reported via support cases. All connected cluster information is used by Red Hat to help make OpenShift Data Foundation better and more intuitive to use. None of the information is shared with third parties. 5.2. 
Information collected by Telemetry Primary information collected by Telemetry includes: The size of the Ceph cluster in bytes : "ceph_cluster_total_bytes" , The amount of the Ceph cluster storage used in bytes : "ceph_cluster_total_used_raw_bytes" , Ceph cluster health status : "ceph_health_status" , The total count of object storage devices (OSDs) : "job:ceph_osd_metadata:count" , The total number of OpenShift Data Foundation Persistent Volumes (PVs) present in the Red Hat OpenShift Container Platform cluster : "job:kube_pv:count" , The total input/output operations per second (IOPS) (reads+writes) value for all the pools in the Ceph cluster : "job:ceph_pools_iops:total" , The total IOPS (reads+writes) value in bytes for all the pools in the Ceph cluster : "job:ceph_pools_iops_bytes:total" , The total count of the Ceph cluster versions running : "job:ceph_versions_running:count" The total number of unhealthy NooBaa buckets : "job:noobaa_total_unhealthy_buckets:sum" , The total number of NooBaa buckets : "job:noobaa_bucket_count:sum" , The total number of NooBaa objects : "job:noobaa_total_object_count:sum" , The count of NooBaa accounts : "noobaa_accounts_num" , The total usage of storage by NooBaa in bytes : "noobaa_total_usage" , The total amount of storage requested by the persistent volume claims (PVCs) from a particular storage provisioner in bytes: "cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" , The total amount of storage used by the PVCs from a particular storage provisioner in bytes: "cluster:kubelet_volume_stats_used_bytes:provisioner:sum" . Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources.
[ "kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist Namespace: open-cluster-management-observability data: metrics_list.yaml: | names: - odf_system_health_status - odf_system_map - odf_system_raw_capacity_total_bytes - odf_system_raw_capacity_used_bytes matches: - __name__=\"csv_succeeded\",exported_namespace=\"openshift-storage\",name=~\"odf-operator.*\"", "oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml", "oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 57m Ready 2022-08-30T06:52:58Z 4.12.0", "oc patch cm rook-ceph-operator-config -n openshift-storage -p USD'data:\\n \"CSI_ENABLE_METADATA\": \"true\"' configmap/rook-ceph-operator-config patched", "oc get pods | grep csi csi-cephfsplugin-b8d6c 2/2 Running 0 56m csi-cephfsplugin-bnbg9 2/2 Running 0 56m csi-cephfsplugin-kqdw4 2/2 Running 0 56m csi-cephfsplugin-provisioner-7dcd78bb9b-q6dxb 5/5 Running 0 56m csi-cephfsplugin-provisioner-7dcd78bb9b-zc4q5 5/5 Running 0 56m csi-rbdplugin-776dl 3/3 Running 0 56m csi-rbdplugin-ffl52 3/3 Running 0 56m csi-rbdplugin-jx9mz 3/3 Running 0 56m csi-rbdplugin-provisioner-5f6d766b6c-694fx 6/6 Running 0 56m csi-rbdplugin-provisioner-5f6d766b6c-vzv45 6/6 Running 0 56m", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-rbd EOF", "oc get pvc | grep rbd-pvc rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 32s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012", "Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 csi.storage.k8s.io/pvc/name rbd-pvc csi.storage.k8s.io/pvc/namespace openshift-storage", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc-clone spec: storageClassName: ocs-storagecluster-ceph-rbd dataSource: name: rbd-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 1Gi EOF", "oc get pvc | grep rbd-pvc rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 15m rbd-pvc-clone Bound pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 1Gi RWO ocs-storagecluster-ceph-rbd 52s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-063b982d-2845-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 csi.storage.k8s.io/pvc/name rbd-pvc-clone csi.storage.k8s.io/pvc/namespace openshift-storage", "cat <<EOF | oc create -f - apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: rbd-pvc-snapshot spec: volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass source: persistentVolumeClaimName: rbd-pvc EOF volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created", "oc get 
volumesnapshot NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE rbd-pvc-snapshot true rbd-pvc 1Gi ocs-storagecluster-rbdplugin-snapclass snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f 17s 18s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/volumesnapshot/name rbd-pvc-snapshot csi.storage.k8s.io/volumesnapshot/namespace openshift-storage csi.storage.k8s.io/volumesnapshotcontent/name snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc-restore spec: storageClassName: ocs-storagecluster-ceph-rbd dataSource: name: rbd-pvc-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteOnce resources: requests: storage: 1Gi EOF persistentvolumeclaim/rbd-pvc-restore created", "oc get pvc | grep rbd db-noobaa-db-pg-0 Bound pvc-615e2027-78cd-4ea2-a341-fdedd50c5208 50Gi RWO ocs-storagecluster-ceph-rbd 51m rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 47m rbd-pvc-clone Bound pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 1Gi RWO ocs-storagecluster-ceph-rbd 32m rbd-pvc-restore Bound pvc-f900e19b-3924-485c-bb47-01b84c559034 1Gi RWO ocs-storagecluster-ceph-rbd 111s", "[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012 csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-f900e19b-3924-485c-bb47-01b84c559034 csi.storage.k8s.io/pvc/name rbd-pvc-restore csi.storage.k8s.io/pvc/namespace openshift-storage", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-cephfs EOF", "get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 11s", "ceph fs volume ls [ { \"name\": \"ocs-storagecluster-cephfilesystem\" } ] ceph fs subvolumegroup ls ocs-storagecluster-cephfilesystem [ { \"name\": \"csi\" } ] ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }", "cat <<EOF | oc create -f - 
apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-clone spec: storageClassName: ocs-storagecluster-cephfs dataSource: name: cephfs-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteMany resources: requests: storage: 1Gi EOF persistentvolumeclaim/cephfs-pvc-clone created", "oc get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 9m5s cephfs-pvc-clone Bound pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce 1Gi RWX ocs-storagecluster-cephfs 20s", "[rook@rook-ceph-tools-c99fd8dfc-6sdbg /]USD ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] [rook@rook-ceph-tools-c99fd8dfc-6sdbg /]USD ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc-clone\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }", "cat <<EOF | oc create -f - apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: cephfs-pvc-snapshot spec: volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass source: persistentVolumeClaimName: cephfs-pvc EOF volumesnapshot.snapshot.storage.k8s.io/cephfs-pvc-snapshot created", "oc get volumesnapshot NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE cephfs-pvc-snapshot true cephfs-pvc 1Gi ocs-storagecluster-cephfsplugin-snapclass snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0 9s 9s", "ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name csi [ { \"name\": \"csi-snap-06336f4e-284e-11ed-95e0-0a580a810215\" } ] ceph fs subvolume snapshot metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 csi-snap-06336f4e-284e-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/volumesnapshot/name\": \"cephfs-pvc-snapshot\", \"csi.storage.k8s.io/volumesnapshot/namespace\": \"openshift-storage\", \"csi.storage.k8s.io/volumesnapshotcontent/name\": \"snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0\" }", "cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-restore spec: storageClassName: ocs-storagecluster-cephfs dataSource: name: cephfs-pvc-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteMany resources: requests: storage: 1Gi EOF persistentvolumeclaim/cephfs-pvc-restore created", "oc get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 29m cephfs-pvc-clone Bound pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce 1Gi RWX ocs-storagecluster-cephfs 20m cephfs-pvc-restore Bound pvc-43d55ea1-95c0-42c8-8616-4ee70b504445 1Gi RWX ocs-storagecluster-cephfs 21s", "ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-3536db13-2850-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] ceph fs subvolume metadata ls 
ocs-storagecluster-cephfilesystem csi-vol-3536db13-2850-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-43d55ea1-95c0-42c8-8616-4ee70b504445\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc-restore\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html-single/monitoring_openshift_data_foundation/index
12.4. Web Services Modeling
12.4. Web Services Modeling 12.4.1. Create Web Service Action This method is recommended for experienced users for consistent and rapid deployment of Web services designed to query relational sources. It provides detailed control of all Web service interfaces, operations and required transformations from XML Views. To create a Web service model from relational models or objects: Select any combination of relational models, tables and/or procedures in the Model Explorer view tree. Note It is recommended that the user selects single source models, which enables auto-naming of input/output schema and Web service models in Step 3. Right-click select Modeling > Create Web Service action. . Figure 12.28. Create Web Service Action In the Create Web Service dialog, specify file names for the generated Input Schema file, Output Schema file and Web service model. Change options as desired. Click Finish when done. Figure 12.29. Create Web Service Dialog When model generation is complete, a confirmation dialog appears. Click OK . Figure 12.30. Generation Completed Dialog 12.4.2. Web Services War Generation 12.4.2.1. Web Services War Generation Teiid Designer allows you to expose your VDBs via a SOAP or REST interface. JBossWS-CXF or RESTEasy wars can be generated based on models within your VDBs. 12.4.2.2. Generating a SOAP War The Teiid Designer provides web service generation capabilities in the form of a JBossWS-CXF war. Once you have added your Web Service Models, as described in New Web Service View Model section, to your VDB, deployed the VDB to a running JBoss Data Virtualization instance and created your VDB's data source, you are ready to expose the web service using the generated war. To generate a new SOAP war using the VDB: Right-click on the VDB containing your web service model(s) and select the Modeling > Generate SOAP War action. Figure 12.31. Fill in missing properties in Web Service War Generation Wizard shown below. Figure 12.32. Generate a JBossWS-CXF War Web Service Dialog Table 12.7. Field Descriptions Field Name Description Name The name of the generated war file. Host The server host name (or IP). Port The server port. VDB JNDI Name The JNDI connection name to the deployed Teiid source VDB. Security options None - no username/password required to connect to the VDB through the generated web service. HTTP Basic - the specified security realm and role will be used. The default realm value is the realm that comes out of the box with JBoss Data Virtualization (teiid-security). The role needs to be defined in the appropriate security mechanism. In the case of Teiid, use the teiid-security-roles.properties file. When using HTTPBasic, a local Teiid connection using the PassthroughAuthentication property is required. WS-Security - a password callback class will be generated for you which will validate that the username/password values you specified in the war generator dialog are passed in. This is meant to be a testing mechanism for your WS-Security enabled web service and your own security mechanism should be implemented in this class. All source code is included in the generated war along with the compiled class files. Target namespace This is the target namespace that will be used in the generated WSDL and subsequent generated web service classes. MTOM (Message Transmission Optimization Mechanism) If selected, MTOM will be enabled for the web service endpoint(s). 
You will also need to update your output schema accordingly by adding the xmlns:xmime="http://www.w3.org/2005/05/xmlmime" schema and adding type="xs:base64Binary" xmime:expectedContentTypes="application/octet-stream" to the output element you wish to optimize. War File Save Location The folder where the generated WAR file should be saved. Click OK to generate the web service war. When war generation is complete, a confirmation dialog should appear. Click OK . Figure 12.33. Generation Completed Dialog 12.4.2.3. Generating a REST War In Teiid Designer , it is also possible to expose your VDBs over REST using a generated RESTEasy war. Also, if your target virtual model has update, insert and delete SQL defined, you can easily provide CRUD capabilities via REST. Accepted inputs into the generated REST operations are URI path parameters, query parameters, and/or XML/JSON. JSON is exposed over a URI that includes "json". For example, http://{host}:{port}/{war_context}/{model_name}/resource will accept URI path parameters and/or XML while http://{host}:{port}/{war_context}/{model_name}/json/resource will accept URI path parameters and/or JSON. You can specify query parameters in the target REST procedure's URI property using & as a delimiter. For example, REST:URI = authors&parm1&parm2 . In a virtual model, add a procedure(s) that returns an XMLLiteral object. The target of your procedure can be any models in your VDB. Here is an example procedure that selects from a virtual table (VirtualBooks) and returns the results as an XMLLiteral: Figure 12.34. Notice the syntax used to convert the relational table result of the select from VirtualBooks to an XMLLiteral. Here is an example of an update procedure that will insert a row and return an XMLLiteral object: Figure 12.35. The input format for the REST procedure could be URI parameters, an XML/JSON document, or some combination of both. When using an XML document, your root node must be <input> and the XML nodes must correspond to the order of the procedure's input parameters. For example, here is the input for the above insert procedure: Figure 12.36. Sample XML Input When using a JSON document, ensure your values match the order of your procedure input parameters as well. Here is the input for the above insert procedure: Figure 12.37. Sample JSON Input To enable REST for a specific procedure, check this option in the Create Relational View Procedure dialog (when creating the procedure). Figure 12.38. This will enable six new properties in the property tab for this procedure defined in the model. These properties are defined in the table below: Table 12.8. Extended Properties for RESTful Procedures Property Name Description Rest Method The HTTP method that will determine the REST mapping of this procedure. Supported methods are: GET, PUT, POST and DELETE URI The resource path to the procedure. For example, if you use books/{isbn} as your URI value for a procedure, http://{host}:{port}/{war_context}/{model_name}/books/123 would execute this procedure and pass 123 in as a parameter. CHARSET Optional property for procedures that return a Blob with content type that is text-based. This character set is used to convert the data. There are two supported options: UTF-8 and US-ASCII Content Type Type of content produced by the service. There are four supported types: any text, xml, plain and json. Description The description of this procedure. HEADERS Semi-colon delimited list of HTTP Header parameters to pass into the REST service operation. 
Example: header1;header2;etc. These values will be passed into the procedure first. Here is what the above example would look like in the Property tab: Figure 12.39. Note that the generated URI will have the model name included as part of the path, so the full URL would look like this: http://{host}:{port}/{war_context}/{model_name}/books/123. If you wanted a REST service to return all books, you would write your procedure just as it is above, but remove the input parameter. The URI property would then just be 'books' (or whatever you want) and the URL would be http://{host}:{port}/{war_context}/{model_name}/books. Once you have added all of your procedures along with the required extended properties, be sure to add the model to your VDB, or synchronize if it's already included in the VDB. You will then need to redeploy the VDB. Important If you redeploy your VDB during development, you may receive an "Invalid Session Exception" due to a stale connection obtained from the pool. This can be corrected by flushing the data source or, alternatively, you could add a test query to your VDB connection's -ds.xml file. This will ensure you get a valid connection after redeploying your VDB. The syntax for the test query is as follows: <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql> If you have not already done so, you will need to create a data source for your VDB. This can be done in the Server view of Designer. Right-click on your deployed VDB and select Create Data Source . The Generate REST WAR dialog will ask you for the JNDI name for your created source so that it can connect to your VDB. Right-click on the VDB containing your virtual model(s) with REST eligible procedures and select the Modeling > Generate RESTWar action. If there are no procedures that are REST eligible, the "Generate RESTWar" option will not be enabled. Figure 12.40. Fill in missing properties in the REST War Generation Wizard shown below. Figure 12.41. Generate a REST WAR War File Dialog Table 12.9. Field Descriptions Field Name Description Name The name of the generated war file. Connection JNDI Name The JNDI connection name to the deployed Teiid source VDB. War File Save Location The folder where the generated WAR file should be saved. Include RESTEasy Jars in lib Folder of WAR If selected, the RESTEasy jars and their dependent jars will be included in the lib folder of the generated WAR. If not selected, the jars will not be included. This should be cleared in environments where RESTEasy is installed in the classpath of the server installation to avoid conflicts. None or HTTPBasic Type of security. Two options: None or HTTPBasic security. Realm Defines a protection space. Role Defines the role in your secure HTTPBasic system. Click OK to generate the REST war. When war generation is complete, a confirmation dialog appears. Click OK . Figure 12.42. Generation Completed Dialog 12.4.2.4. Deploying Your Generated WAR File Once you have generated your war file, you will need to deploy it to your JBoss Data Virtualization instance. There are a few ways to accomplish this. From JBDS or JBoss Tools Ensure the target JBoss Data Virtualization instance is configured and running. Select your WAR file in the Model Explorer view. If you did not generate your war to that location, you can copy and paste it there. Right-click on the WAR file and select Mark as Deployable . This will cause your WAR file to be automatically deployed to the JBoss Data Virtualization instance you have defined. Figure 12.43. 
Using the JBoss Data Virtualization Administration Console Using the administration console that comes with JBoss Data Virtualization, you can deploy WAR files. The administration console is available at http://{hostname}:9990. Once logged on, use the Add button in the Deployments tab. Note that the default port for the administration console is 9990 . Depending on the jboss.socket.binding.port-offset property in the server configuration, the port number might be different. Manual Deployment to JBoss Data Virtualization It is possible to deploy the generated WAR by manually copying the file to the deploy folder of the target JBoss Data Virtualization. If the server is running, the WAR will deploy automatically via hot deploy. Otherwise, the WAR will deploy at the start of the server. 12.4.2.5. Testing Your Generated WAR Files Once you have deployed your war file, you are ready to test it out. There are a few ways to accomplish this. SOAP WAR Testing Determining Your WSDL URL You can get your WSDL URL at http://{server:port}/{warName}/{interfaceName}?wsdl . Also, you can show all web services in the Administration console. Once logged on, click on Runtime tab and choose Webservices from the side menu. REST WAR Testing What is my URI? When you modeled your REST procedures, you assigned a URI for each HTTP Operation you defined along with the corresponding operation (GET, PUT, POST or DELETE). The full path of each URI is defined as /{war_context}/{model_name}/{resource} for XML input/output and /{war_context}/{model_name}/json/{resource} for JSON input/output. Using your REST URL, you can use any testing tool with REST support such as the Web Service Tester included with JBDS and JBoss Tools or an external tool like soapUI or cURL.
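As a concrete illustration of the cURL option mentioned above, the following commands sketch how the generated endpoints might be exercised from a shell. The host, port, war context ( myrest ), model name ( BooksModel ), resource paths, and credentials are all assumptions for illustration; substitute the values from your own deployment and modeled procedures.
USD curl -v "http://localhost:8080/myrest/BooksModel/books/123"
USD curl -v "http://localhost:8080/myrest/BooksModel/json/books/123"
USD curl -v -X POST -H "Content-Type: application/xml" -d '<input><isbn>123</isbn><title>Example</title><author>A. Writer</author></input>' "http://localhost:8080/myrest/BooksModel/books"
USD curl -v -u someuser:somepassword "http://localhost:8080/myrest/BooksModel/books/123"
The first command retrieves a single book by ISBN over the XML URI, the second uses the JSON variant of the same resource, the third posts an XML <input> document whose child elements follow the order of the insert procedure's parameters, and the last shows passing credentials when HTTPBasic security was selected during war generation.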
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sect-web_services_modeling
Chapter 1. HawtIO release notes
Chapter 1. HawtIO release notes This chapter provides release information about HawtIO Diagnostic Console Guide. 1.1. HawtIO features HawtIO Diagnostic Console is available for general release in the HawtIO Diagnostic Console Guide 4.0.0. HawtIO includes the following main features: Runtime management of JVM via JMX, especially that of Camel applications and AMQ broker, with specialised views Visualisation and debugging/tracing of Camel routes Simple managing and monitoring of application metrics 1.1.1. Release features UI plugins Connect JMX Camel Runtime Logs Quartz Spring Boot UI extension with custom plugins Authentication RBAC BASIC Authentication Spring Security Keycloak HawtIO Operator Managing HawtIO Online instances via HawtIO Custom Resources (CR) Addition of CR through the OpenShift Console; Addition of CR using CLI tools, eg. oc ; Deletion of CR through OpenShift Console or CLI results in removal of all owned HawtIO resources, inc. ConfigMaps, Deployments, ReplicationController etc.; Removal of operator-managed pod or other resource results in replacement being generated; Addition of property or modification of existing property, eg. CPU, Memory or custom configmap, results in new pod being deployed comprising the updated values Installation via Operator Hub Upgrade of operator will take place if the Tech Preview version has been previously installed. The 1.0.0 GA operator will report itself in the catalog as 1.0.1, purely to differentiate itself from the Tech Preview version; Successful installs via either the numbered (2.x) or the latest channels will result in the same operator version and operand being installed; Successful install of the operator through the catalog; Searching for HawtIO in the catalog will return both the product and community versions of the operator. Correct identification of the versions should be obvious. HawtIO Online With no credentials supplied, the application should redirect to the OpenShift authentication page The entering of correct OpenShift-supplied credentials should redirect back to the Discovery page of the application; The entering of incorrect OpenShift-supplied credentials should result in the user being instructed that logging-in cannot be completed; Discovery Only jolokia-enabled pods should be visible either in the same namespace (Namespace mode) or across the cluster (Cluster mode); Pods should display the correct status (up or down) through their status icons; Only those pods that have a working status should be capable of connection (connect button visible); The OpenShift console URL should have been populated by the startup scripts of HawtIO. Therefore, all labels relating to a feature accessible in the OpenShift console should have hyperlinks that open to the respective console target; The OpenShift console should be accessible from a link in the dropdown menu in the head bar of the application; All jolokia-enabled apps should have links listed in the dropdown menu in the head bar of the application; Connection to HawtIO-enabled applications Clicking the Connect button to a pod in the Discovery page should open a new window/tab and 'connect' to the destination app. This should manifest as the HawtIO Online UI showing plugin names vertically down the left sidebar, eg. 
JMX, and details of the respective focused plugin displayed in the remainder of the page; Failure to connect to a pod should present the user with some kind of error message; Once connected, all features listed in the 'UI Plugins' (above) should be available for testing where applicable to the target application. 1.1.2. HawtIO known issues The following issues remain with HawtIO for this release: HAWNG-147 Fuse web console - support both RH-SSO and Properties login When Keycloak/RH-SSO is configured for web console authentication, the user is automatically redirected to the Keycloak login page. There is no option for the user to attempt local/properties authentication, even if that JAAS module is also configured. HAWNG-698 Camel 4 application's Fuse Console is not loading properly and throwing 'No Selected Container' Camel 4 application's Fuse Console is not loading properly and throwing 'No Selected Container'. Errors returned are (502) Bad Gateway and (504) ERR_INSUFIENT_RESOURCE.
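As a usage sketch for the operator features listed above, a HawtIO custom resource added with oc might look like the following. The apiVersion, kind, and field names are assumptions drawn from the community Hawtio operator and may differ in the productized build; check the CRD installed by the operator before applying.
apiVersion: hawt.io/v1
kind: Hawtio
metadata:
  name: hawtio-online
spec:
  type: Namespace   # assumed field: Namespace mode watches jolokia-enabled pods in this namespace only; Cluster mode watches the whole cluster
  replicas: 1       # assumed field: number of HawtIO Online pods the operator maintains
USD oc apply -f hawtio-online.yaml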
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/release_notes_for_hawtio_diagnostic_console_guide/camel-hawtio-release-notes_hawtio
Chapter 11. Using service accounts in applications
Chapter 11. Using service accounts in applications 11.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 11.2. Default service accounts Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project. 11.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service Account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 11.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service Account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any imagestream in the project using the internal container image registry. 11.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. 
Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44 11.4. Using a service account's credentials externally You can distribute a service account's token to external applications that must authenticate to the API. To pull an image, the authenticated user must have get rights on the requested imagestreams/layers . To push an image, the authenticated user must have update rights on the requested imagestreams/layers . By default, all service accounts in a project have rights to pull any image in the same project, and the builder service account has rights to push any image in the same project. Procedure View the service account's API token: USD oc describe secret <secret_name> For example: USD oc describe secret robot-token-uzkbh -n top-secret Example output Name: robot-token-uzkbh Labels: <none> Annotations: kubernetes.io/service-account.name=robot,kubernetes.io/service-account.uid=49f19e2e-16c6-11e5-afdc-3c970e4b7ffe Type: kubernetes.io/service-account-token Data token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9... Log in using the token that you obtained: USD oc login --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9... Example output Logged into "https://server:8443" as "system:serviceaccount:top-secret:robot" using the token provided. You don't have any projects. You can try to create a new project, by running USD oc new-project <projectname> Confirm that you logged in as the service account: USD oc whoami Example output system:serviceaccount:top-secret:robot
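Section 11.3 above notes that a service account is granted permissions by binding it to a role. As a minimal sketch, assuming the robot service account in the project1 namespace shown in the output above, and using the built-in view role purely as an example:
USD oc policy add-role-to-user view -z robot -n project1
The -z flag is shorthand for a service account in the target namespace, so the equivalent long form is:
USD oc policy add-role-to-user view system:serviceaccount:project1:robot -n project1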
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44", "oc describe secret <secret_name>", "oc describe secret robot-token-uzkbh -n top-secret", "Name: robot-token-uzkbh Labels: <none> Annotations: kubernetes.io/service-account.name=robot,kubernetes.io/service-account.uid=49f19e2e-16c6-11e5-afdc-3c970e4b7ffe Type: kubernetes.io/service-account-token Data token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9", "oc login --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9", "Logged into \"https://server:8443\" as \"system:serviceaccount:top-secret:robot\" using the token provided. You don't have any projects. You can try to create a new project, by running USD oc new-project <projectname>", "oc whoami", "system:serviceaccount:top-secret:robot" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/using-service-accounts
Chapter 2. Image Registry Operator in OpenShift Container Platform
Chapter 2. Image Registry Operator in OpenShift Container Platform 2.1. Image Registry on cloud platforms and OpenStack The Image Registry Operator installs a single instance of the OpenShift image registry, and manages all registry configuration, including setting up registry storage. Note Storage is only automatically configured when you install an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM, or OpenStack. When you install or upgrade an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM, or OpenStack, the Image Registry Operator sets the spec.storage.managementState parameter to Managed . If the spec.storage.managementState parameter is set to Unmanaged , the Image Registry Operator takes no action related to storage. After the control plane deploys, the Operator will create a default configs.imageregistry.operator.openshift.io resource instance based on configuration detected in the cluster. If insufficient information is available to define a complete configs.imageregistry.operator.openshift.io resource, the incomplete resource will be defined and the Operator will update the resource status with information about what is missing. The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location as well. All configuration and workload resources for the registry reside in that namespace. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the ClusterOperator object for the Image Registry Operator. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning custom resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metatdata in etcd. 2.2. Image Registry on bare metal, Nutanix, and vSphere 2.2.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 2.3. Image Registry Operator configuration parameters The configs.imageregistry.operator.openshift.io resource offers the following configuration parameters. Parameter Description managementState Managed : The Operator updates the registry as configuration resources are updated. Unmanaged : The Operator ignores changes to the configuration resources. Removed : The Operator removes the registry instance and tear down any storage that the Operator provisioned. logLevel Sets logLevel of the registry instance. Defaults to Normal . The following values for logLevel are supported: Normal Debug Trace TraceAll httpSecret Value needed by the registry to secure uploads, generated by default. operatorLogLevel The operatorLogLevel configuration parameter provides intent-based logging for the Operator itself and a simple way to manage coarse-grained logging choices that Operators must interpret for themselves. This configuration parameter defaults to Normal . It does not provide fine-grained control. 
The following values for operatorLogLevel are supported: Normal Debug Trace TraceAll proxy Defines the Proxy to be used when calling master API and upstream registries. storage Storagetype : Details for configuring registry storage, for example S3 bucket coordinates. Normally configured by default. readOnly Indicates whether the registry instance should reject attempts to push new images or delete existing ones. requests API Request Limit details. Controls how many parallel requests a given registry instance will handle before queuing additional requests. defaultRoute Determines whether or not an external route is defined using the default hostname. If enabled, the route uses re-encrypt encryption. Defaults to false . routes Array of additional routes to create. You provide the hostname and certificate for the route. replicas Replica count for the registry. disableRedirect Controls whether to route all data through the registry, rather than redirecting to the back end. Defaults to false . spec.storage.managementState The Image Registry Operator sets the spec.storage.managementState parameter to Managed on new installations or upgrades of clusters using installer-provisioned infrastructure on AWS or Azure. Managed : Determines that the Image Registry Operator manages underlying storage. If the Image Registry Operator's managementState is set to Removed , then the storage is deleted. If the managementState is set to Managed , the Image Registry Operator attempts to apply some default configuration on the underlying storage unit. For example, if set to Managed , the Operator tries to enable encryption on the S3 bucket before making it available to the registry. If you do not want the default settings to be applied on the storage you are providing, make sure the managementState is set to Unmanaged . Unmanaged : Determines that the Image Registry Operator ignores the storage settings. If the Image Registry Operator's managementState is set to Removed , then the storage is not deleted. If you provided an underlying storage unit configuration, such as a bucket or container name, and the spec.storage.managementState is not yet set to any value, then the Image Registry Operator configures it to Unmanaged . 2.4. Enable the Image Registry default route with the Custom Resource Definition In OpenShift Container Platform, the Registry Operator controls the OpenShift image registry feature. The Operator is defined by the configs.imageregistry.operator.openshift.io Custom Resource Definition (CRD). If you need to automatically enable the Image Registry default route, patch the Image Registry Operator CRD. Procedure Patch the Image Registry Operator CRD: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}' 2.5. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. 
The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 2.6. Configuring storage credentials for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, storage credential configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry 2.7. Additional resources Configuring the registry for AWS user-provisioned infrastructure Configuring the registry for GCP user-provisioned infrastructure Configuring the registry for Azure user-provisioned infrastructure Configuring the registry for bare metal Configuring the registry for vSphere
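Section 2.2.1 states that on platforms without shareable object storage you must switch the managementState from Removed to Managed after installation. A minimal sketch of that change follows the same oc patch pattern used for the default route above; the emptyDir storage shown here keeps images only on the node running the registry pod and is an assumption suitable for testing, not production use:
USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'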
[ "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/registry/configuring-registry-operator
Chapter 8. Message delivery
Chapter 8. Message delivery 8.1. Handling unacknowledged deliveries Messaging systems use message acknowledgment to track if the goal of sending a message is truly accomplished. When a message is sent, there is a period of time after the message is sent and before it is acknowledged (the message is "in flight"). If the network connection is lost during that time, the status of the message delivery is unknown, and the delivery might require special handling in application code to ensure its completion. The sections below describe the conditions for message delivery when connections fail. Non-transacted producer with an unacknowledged delivery If a message is in flight, it is sent again after reconnect, provided a send timeout is not set and has not elapsed. No user action is required. Transacted producer with an uncommitted transaction If a message is in flight, it is sent again after reconnect. If the send is the first in a new transaction, then sending continues as normal after reconnect. If there are sends in the transaction, then the transaction is considered failed, and any subsequent commit operation throws a TransactionRolledBackException . To ensure delivery, the user must resend any messages belonging to a failed transaction. Transacted producer with a pending commit If a commit is in flight, then the transaction is considered failed, and any subsequent commit operation throws a TransactionRolledBackException . To ensure delivery, the user must resend any messages belonging to a failed transaction. Non-transacted consumer with an unacknowledged delivery If a message is received but not yet acknowledged, then acknowledging the message produces no error but results in no action by the client. Because the received message is not acknowledged, the producer might resend it. To avoid duplicates, the user must filter out duplicate messages by message ID. Transacted consumer with an uncommitted transaction If an active transaction is not yet committed, it is considered failed, and any pending acknowledgments are dropped. Any subsequent commit operation throws a TransactionRolledBackException . The producer might resend the messages belonging to the transaction. To avoid duplicates, the user must filter out duplicate messages by message ID. Transacted consumer with a pending commit If a commit is in flight, then the transaction is considered failed. Any subsequent commit operation throws a TransactionRolledBackException . The producer might resend the messages belonging to the transaction. To avoid duplicates, the user must filter out duplicate messages by message ID. 8.2. Extended session acknowledgment modes The client supports two additional session acknowledgement modes beyond those defined in the JMS specification. Individual acknowledge In this mode, messages must be acknowledged individually by the application using the Message.acknowledge() method used when the session is in CLIENT_ACKNOWLEDGE mode. Unlike with CLIENT_ACKNOWLEDGE mode, only the target message is acknowledged. All other delivered messages remain unacknowledged. The integer value used to activate this mode is 101. connection.createSession(false, 101); No acknowledge In this mode, messages are accepted at the server before being dispatched to the client, and no acknowledgment is performed by the client. The client supports two integer values to activate this mode, 100 and 257. connection.createSession(false, 100);
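The sections above repeatedly advise filtering out duplicate messages by message ID after a connection failure, and introduce the individual acknowledge mode. The following sketch shows one way a consumer could combine the two. It is not part of the client API itself; the jakarta.jms package names are assumed (older client versions use javax.jms), and the unbounded in-memory set of seen IDs is for illustration only, since a real application would bound or persist it.
import jakarta.jms.Connection;
import jakarta.jms.Message;
import jakarta.jms.MessageConsumer;
import jakarta.jms.Session;
import java.util.HashSet;
import java.util.Set;

public class DedupingConsumer {
    // Integer value for the individual acknowledge session mode described above
    private static final int INDIVIDUAL_ACKNOWLEDGE = 101;

    public static void consume(Connection connection, String queueName) throws Exception {
        Session session = connection.createSession(false, INDIVIDUAL_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        Set<String> seenMessageIds = new HashSet<>(); // illustration only: unbounded in-memory store

        Message message;
        while ((message = consumer.receive(1000)) != null) {
            String id = message.getJMSMessageID();
            if (seenMessageIds.add(id)) {
                process(message);        // first delivery of this ID: handle it
                message.acknowledge();   // acknowledges only this message in this mode
            } else {
                message.acknowledge();   // duplicate resent after reconnect: acknowledge and discard
            }
        }
    }

    private static void process(Message message) {
        // application-specific handling goes here
    }
}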
[ "connection.createSession(false, 101);", "connection.createSession(false, 100);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/message_delivery
Chapter 12. Using a service account as an OAuth client
Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. Static and dynamic annotations can be used at the same time to achieve the desired behavior:
[ "oc sa get-token <service_account_name>", "serviceaccounts.openshift.io/oauth-redirecturi.<name>", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/using-service-accounts-as-oauth-client
2.7. Creating User Private Groups Automatically Using SSSD
2.7. Creating User Private Groups Automatically Using SSSD An SSSD client directly integrated into AD can automatically create a user private group for every AD user retrieved, ensuring that its GID matches the user's UID unless the GID number is already taken. To avoid conflicts, make sure that no groups with the same GIDs as user UIDs exist on the server. The GID is not stored in AD. This ensures that AD users benefit from group functionality, while the LDAP database does not contain unnecessary empty groups. 2.7.1. Activating the Automatic Creation of User Private Groups for AD users To activate the automatic creation of user private groups for AD users: Edit the /etc/sssd/sssd.conf file, adding in the [domain/LDAP] section: Restart the sssd service, removing the sssd database: After performing this procedure, every AD user has a GID which is identical to the UID: 2.7.2. Deactivating the Automatic Creation of User Private Groups for AD users To deactivate the automatic creation of user private groups for AD users: Edit the /etc/sssd/sssd.conf file, adding in the [domain/LDAP] section: Restart the sssd service, removing the sssd database: After performing this procedure, all AD users have an identical, generic GID:
[ "auto_private_groups = true", "service sssd stop ; rm -rf /var/lib/sss/db/* ; service sssd start", "id ad_user1 uid=121298(ad_user1) gid=121298(ad_user1) groups=121298(ad_user1),10000(Group1) id ad_user2 uid=121299(ad_user2) gid=121299(ad_user2) groups=121299(ad_user2),10000(Group1)", "auto_private_groups = false", "service sssd stop ; rm -rf /var/lib/sss/db/* ; service sssd start", "id ad_user1 uid=121298(ad_user1) gid=10000(group1) groups=10000(Group1) id ad_user2 uid=121299(ad_user2) gid=10000(group1) groups=10000(Group1)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/gid-for-ad-users
Chapter 24. Verifying certificates using IdM Healthcheck
Chapter 24. Verifying certificates using IdM Healthcheck Learn more about understanding and using the Healthcheck tool in Identity management (IdM) to identify issues with IPA certificates maintained by certmonger . For details, see Healthcheck in IdM . 24.1. IdM certificates Healthcheck tests The Healthcheck tool includes several tests for verifying the status of certificates maintained by certmonger in Identity Management (IdM). For details about certmonger, see Obtaining an IdM certificate for a service using certmonger . This suite of tests checks expiration, validation, trust and other issues. Multiple errors may be thrown for the same underlying issue. To see all certificate tests, run the ipa-healthcheck with the --list-sources option: You can find all tests under the ipahealthcheck.ipa.certs source: IPACertmongerExpirationCheck This test checks expirations in certmonger . If an error is reported, the certificate has expired. If a warning appears, the certificate will expire soon. By default, this test applies within 28 days or fewer days before certificate expiration. You can configure the number of days in the /etc/ipahealthcheck/ipahealthcheck.conf file. After opening the file, change the cert_expiration_days option located in the default section. Note Certmonger loads and maintains its own view of the certificate expiration. This check does not validate the on-disk certificate. IPACertfileExpirationCheck This test checks if the certificate file or NSS database cannot be opened. This test also checks expiration. Therefore, carefully read the msg attribute in the error or warning output. The message specifies the problem. Note This test checks the on-disk certificate. If a certificate is missing, unreadable, etc a separate error can also be raised. IPACertNSSTrust This test compares the trust for certificates stored in NSS databases. For the expected tracked certificates in NSS databases the trust is compared to an expected value and an error raised on a non-match. IPANSSChainValidation This test validates the certificate chain of the NSS certificates. The test executes: certutil -V -u V -e -d [dbdir] -n [nickname] IPAOpenSSLChainValidation This test validates the certificate chain of the OpenSSL certificates. To be comparable to the NSSChain validation here is the OpenSSL command we execute: IPARAAgent This test compares the certificate on disk with the equivalent record in LDAP in uid=ipara,ou=People,o=ipaca . IPACertRevocation This test uses certmonger to verify that certificates have not been revoked. Therefore, the test can find issues connected with certificates maintained by certmonger only. IPACertmongerCA This test verifies the certmonger Certificate Authority (CA) configuration. IdM cannot issue certificates without CA. Certmonger maintains a set of CA helpers. In IdM, there is a CA named IPA which issues certificates through IdM, authenticating as a host or user principal, for host or service certs. There are also dogtag-ipa-ca-renew-agent and dogtag-ipa-ca-renew-agent-reuse which renew the CA subsystem certificates. Note Run these tests on all IdM servers when trying to check for issues. 24.2. Screening certificates using the Healthcheck tool Follow this procedure to run a standalone manual test of an Identity Management (IdM) certificate health check using the Healthcheck tool. 
The Healthcheck tool includes many tests, therefore, you can shorten the results with: Excluding all successful test: --failures-only Including only certificate tests: --source=ipahealthcheck.ipa.certs Prerequisites You must perform Healthcheck tests as the root user. Procedure To run Healthcheck with warnings, errors and critical issues regarding certificates, enter: Successful test displays empty brackets: Failed test shows you the following output: This IPACertfileExpirationCheck test failed on opening the NSS database. Additional resources See man ipa-healthcheck .
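The IPACertmongerExpirationCheck description above mentions tuning cert_expiration_days in /etc/ipahealthcheck/ipahealthcheck.conf. A minimal sketch of that change, assuming you want warnings to begin 60 days before expiration (the value is only an example):
[default]
cert_expiration_days=60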
[ "ipa-healthcheck --list-sources", "openssl verify -verbose -show_chain -CAfile /etc/ipa/ca.crt [cert file]", "ipa-healthcheck --source=ipahealthcheck.ipa.certs --failures-only", "[]", "{ \"source\": \"ipahealthcheck.ipa.certs\", \"check\": \"IPACertfileExpirationCheck\", \"result\": \"ERROR\", \"kw\": { \"key\": 1234, \"dbdir\": \"/path/to/nssdb\", \"error\": [error], \"msg\": \"Unable to open NSS database '/path/to/nssdb': [error]\" } }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/verifying-certificates-using-idm-healthcheck_managing-certificates-in-idm
Chapter 8. Updated Packages
Chapter 8. Updated Packages 8.1. 389-ds-base 8.1.1. RHBA-2014:1385 - 389-ds-base bug fix and enhancement update Updated 389-ds-base packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The 389 Directory Server is an LDAPv3 compliant server. The base packages include the Lightweight Directory Access Protocol (LDAP) server and command-line utilities for server administration. Bug Fixes BZ# 1001037 When a new user was created on Active Directory (AD) and their password was set, the system administor checked the flag "User must change password on login". Afterwards, the default password was sent to Red Hat Directory Server (RHDS), which set the password but removed the aforementioned flag. With this update, the flag for password change at login persists, and the password sync tool for by-passing the 7-day constraint is allowed if the flag is checked. BZ# 1008021 If an ACI (access control instruction) is configured to give permissions to "self", bound user itself, the result of a granted access for an entry was cached and could erroneously be reused for all entries. Consequently, a bound client could retrieve entries or attributes it was not supposed to, or fail to retrieve those entries and attributes it was supposed to retrieve. With this update, certain accesses are granted per entry, making sure that if a granted access is cached, it is purged for the entry. BZ# 1009122 The multi-master replication protocol keeps a cumulative counter of the relative time offsets between servers. However, prior to this update, if the system time was adjusted by more than one day, the counter became off by more than one day. Consequently, a replication consumer refused to accept changes from the master and the replication process failed. This update adds a new configuration attribute to cn=config - nsslapd-ignore-time-skew, with the default of "off". In addition, an error message is logged warning the system administrator about the time skew issue. Alternatively, if this attribute is set to "on", a replication consumer allows replication to proceed despite the excessive time skew. BZ# 1012699 Previously, when an invalid install script from host name to the server was supplied, a vague error message was returned to the user. This update provides a proper error message to be returned when a setup script encounters an error in the host name. BZ# 1044218 Previously, the size of the directory server was constantly increasing after search requests for simple paged results were processed. The memory leak causing this bug has been fixed, and the server size no longer increases in the aforementioned situation. BZ# 1049029 Prior to this update, Windows Sync Control request returned the renamed member of a group entry only, not the group containing this member. As a consequence, renaming user Distinguished Name (DN) on Active Directory (AD) was not applied to the synced member DN in a group that the user DN belonged to. With this update, once a rename operation is received from AD, Windows Sync Control searches groups having a member value, and replaces the old DN with the renamed DN. In addition, Windows Sync Control also updates the renamed member DN in a group as intended. BZ# 1053766 Previously, when importing an LDAP Data Interchange Format (LDIF) or doing a replication initialization that contained tombstone entries, the parent entry of the tombstone entry had its numsubordinate entry count incorrectly incremented. 
With this update, the parent entry numsubordinate attribute is not updated when processing a tombstone entry, and numsubordinate value is now accurate in all entries. BZ# 1057805 Previously, calculating the size of an entry in the memory was underestimated: the entry cache size was larger than the specified size in the configuration. This bug has been fixed by calculating each entry size more accurately, which leads to more accurate size of the entry cache. BZ# 1060385 When trying to process an empty log file, the logconv.pl utility failed to run and reported a series of Perl errors. To fix this bug, empty log files are checked and ignored, and logconv.pl reports the empty log file by the following message: BZ# 1070583 While a Total Replication Update or Replica Initialization was occurring, the server could terminate unexpectedly. With this update, the replication plugin is not allowed to terminate while the total update of replica is still running, and the server thus no longer crashes. BZ# 1070720 Prior to this update, using the "-f" filter option caused the rsearch utility to return a filter syntax error. This update makes sure the filter is properly evaluated, and rsearch now works correctly when using the "-f" option. BZ# 1071707 Previously, when a search request for simple paged results was sent to a server and the request was abandoned, the paged result slot in the connection table was not properly released. Consequently, as the slot was not available, the temporary initial slot number "-1" was kept to access an array, which caused its invalid access. With this update, the abandoned slot content is properly deleted for reuse. As a result, the temporary slot number is now replaced with the correct slot number, and invalid array accesses no longer occur. BZ# 1073530 Due to exceeded size limit, Access Control Instruction (ACI) group evaluation failed. However, the "sizelimit" value could be a false value retrieved from a non-search operation. With this update, detected false values are replaced with an unlimited value (-1), and ACI group evaluation no longer fails due to an unexpected sizelimit exceeded error. BZ# 1077895 Performing an LDAP operation using the proxied authentication control could previously lead to server memory leaks. With this update, the allocated memory is released after the operation completion, and the server no longer leaks memory when processing operations using the proxied authentication control. BZ# 1080185 Prior to this update, the tombstone data resurrection did not consider the case in which its parent entry became a conflict entry. In addition, resurrected tombstone data treatment was missing in the entryrdn index. As a consequence, the parent-child relationship became confused when the tombstone data was being resurrected. With this update, the Directory Information Tree (DIT) structure is properly maintained; even if the parent of a tombstone-data entry becomes a conflict entry, the parent-child relationship is now correctly managed. BZ# 1083272 Due to improper use of the valueset_add_valueset() function, which expects only empty values to be passed to it, the server could terminate unexpectedly. This update handles the misuse of the function, which now no longer causes the server to crash. BZ# 1086454 Previously, the logging level was too verbose for the severity of the message, and the errors log could fill up with redundant messages. 
To fix this bug, the logging has been changed to be written only when "access control list processing" log level is being used, and thus the errors log no longer fills up with harmless warning messages. BZ# 1086903 Previously, if the do_search() function failed at the early phase, the memory storing the given baseDN was not freed. The underlying source code has been fixed, and the baseDN no longer leaks memory even if the search fails at the early phase. BZ# 1086907 Previously, in the entry cache, some delete operations failed with an error when entries were deleted while tombstone purging was in process. This update retries to obtain the parent entry until it succeeds or times out. As a result, delete operations in the entry cache now succeed as intended. BZ# 1092097 Previously, when Multi Master Replication was configured, if an entry was updated on Master 1 and deleted on Master2, the replicated update from Master 1 could target on a deleted entry (a tombstone). This led to two consequences. Firstly, the replicated update failed and could break the replication. Secondly, the tombstone entries differed on Master 1 and Master 2. This update allows updates on a tombstone if the update originates in a replication. Now, replication succeeds and tombstone entries are identical on all servers. BZ# 1097002 When deleting a node entry whose descendants were all deleted, previously, only the first position was checked. Consequently, the child entry at the first position was deleted in the database. However, it could be reused for the replaced tombstone entry, which reported the false error "has children", and thus caused the node deletion to fail. With this update, instead of checking the first position, all child entries are checked whether they are tombstones or not; in case all of them are tombstones, the node is deleted. Now, the false error "has children" is no longer reported, and a node entry whose children are all tombstones is successfully deleted. BZ# 1098653 When a replication is configured, a replication change log database is also a target of the backup. However, backing up a change log database previously failed because there was no back end instance associated with the replication change log database. As a consequence, backing up on a server failed. With this update, if a backing up database is a change log database, the db2bak.pl utility skips checking the back end instance, and backing up thus works as intended. BZ# 1103287 When processing a large amount of access logs without using any verbose options, memory continued to grow until the system was exhausted of available memory, or logs were completely processed. The back-ported feature causing excessive memory consumption has been removed, and memory now remains stable regardless of the amount of logging being processed. BZ# 1103337 Previously, the following message was incorrectly coded as an error level: Consequently, once the server run into the state, this benign error message was logged in the error log repeatedly. To fix this bug, the log level has been changed, and the the message is no longer logged. BZ# 1106917 Prior to this update, when performing a modrdn operation on a managed entry, the managed entry plugin failed to properly update managed entry pointer. The underlying source code has been fixed, and the managed entry link now remains intact on modrdn operations. 
BZ# 1109333 The MemberOf plugin code assumed the Distinguished Name (DN) value to have the correct syntax, and did not check the normalized value of that DN. This could lead to dereferencing a NULL pointer and unexpected termination. This update checks the normalized value and logs a proper error. As a result, invalid DN no longer causes crashes and errors are properly logged. BZ# 1109335 When adding and deleting entries, the modified parent entry, numsubordinates, could be replaced in the entry cache, even if the operation failed. As a consequence, parent numsubordinate count could be incorrectly updated. This update adds code to unswitch the parent entry in the cache, and parent numsubordinate count is now guaranteed to be correct. BZ# 1109337 Previously, if nested tombstone entries were present, parents were always purged first, and thus their child entries became orphaned. With this update, when doing the tombstone purge, the candidate list is processed in the reverse order, which removes the child entries before the parent entries. As a result, orphaned tombstone entries are no longer left orphaned after purging. BZ# 1109352 Previously, a tombstone purge thread issued a callback search that started reading the id2entry file, even if the back end had already been stopped or disabled. This could cause the server to terminate unexpectedly. Now, when performing a search and returning entries, this update checks if the back end is started before reading id2entry. As a result, even if the tombstone purge occurs while the back end is stopped, the server no longer crashes. BZ# 1109356 Due to various mistakes in the source code, potential memory leaks, corrupted memory, or crashes could occur. All the aforementioned bugs have been addressed, and the server now behaves as expected without crashing or leaking memory. BZ# 1109358 Due to a failure in back end transaction, the post plugin was not properly passed to the back end. As a consequence, the ldapdelete client unexpectedly executed a tombstone deletion. A failure check code has been added with this update, and a tombstone deletion by ldapdelete now fails as expected. BZ# 1109361 Previously, the server enabled the rsa_null_sha cipher, which was not considered secure. With this update, rsa_null_sha is no longer available. BZ# 1109363 Previously, the caller of the slapi_valueset_add_attr_valuearray_ext() function freed the returned Slapi_ValueSet data type improperly upon failure. Consequently, Slapi_ValueSet leaked memory when the attribute adding operation failed. This update adds the code to free the memory, and returned Slapi_ValueSet no longer leaks memory. BZ# 1109373 Prior to this update, syntax plugins were loaded during bootstrapping. However, in that phase, attributes had already been handled. As a consequence, the sorted results of multi-attribute values in schema and Directory Server specific Entries (DSE) became invalid. This update adds a default syntax plugin, and the sorted results of DSE and schema are now in the right order. BZ# 1109377 Previously, environment variables, except from TERM and LANG, were ignored if a program was started using the "service" utility. Consequently, memory fragmentation could not be configured. To fix this bug, mallopt environment variables, "SLAPD_MXFAST", "MALLOC_TRIM_THRESHOLD_" and "MALLOC_MMAP_THRESHOLD_", have been made configurable. Now, memory fragmentation can be controlled explicitly and provide instructions to the "service" utility. 
BZ# 1109379 Prior to this update, when running a CLEANALLRUV task, the changelog replication incorrectly examined a Change Sequence Number (CSN) which could be deleted and returned as the minimum CSN for a replica. With this update, CSNs that are from a "cleaned" replica ID are ignored, and replication now uses the correct minimum CSN. BZ# 1109381 Previously, a group on Active Directory (AD) had a new member which was not a target of windows sync and existed only on AD. If an operation was executed on AD, the member was replaced with other members which were the targets of the windows sync. Consequently, the new member values were not synchronized. With this update, a modify operation follows including the member value, which is now proceeded by confirming the existence on AD, thus fixing the bug. If a group on Active Directory (AD) and Directory Server (DS) had members which were local, not synchronized, and the members were removed from the group on one side, the delete operation was synchronized and all the members including the local ones were deleted. The underlying source code has been modified to check, firstly, if an attribute is completely deleted on one side, secondly, if each value on the other side is in the sync scope. In addition, the value is now put to the mode for the delete only if the value is in the sync scope. BZ# 1109384 Previously, the manual page for the logconv.pl utility was missing some of the command line options. The manual page has been updated to show the complete usage of logconv.pl with all the available options. BZ# 1109387 Due to a bug in partial restoration, the order of the restored index became confused. With this update, the default compare function is called. Now, after running a partial restoration, indexing problems no longer occur. BZ# 1109443 In processing Class of Service (CoS) definition entry, if the cosTemplateDn entry was not yet given when the cosAttribute entry was being processed, the parent entry Distinguished Name (DN) was set to cosTemplateDn automatically. Consequently, the parent entry could be an ancestor entry of an entry to be updated. In addition, if the entry was a target of the betxn type of plugins, a deadlock occurred. With this update, the parent entry DN is added only when codTemplateDn is not provided. Now, even if cosAttribute and cosTemplateDn are listed in the order in the CoS definition entry and the betxn type plug-ins are enabled, updating an entry no longer causes deadlocks. BZ# 1109952 Previously, if Virtual List View (VLV) search failed with "timelimit" or "adminlimit" server resources, the allocated ID list was not freed. Consequently, when the failure occurred, the memory used for the ID list leaked. This update adds the free code for the error cases, and the memory leaks caused by the VLV failure no longer occur. Enhancement BZ# 985270 Previously, only the root Distinguished Name (DN) accounts were able to specify users that could bypass the password policy settings or add hashed passwords to users. With this update, non-root DN accounts are allowed to perform these types of operations as well. Users of 389-ds-base are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. After installing this update, the 389 server service will be restarted automatically. 8.1.2. RHBA-2014:1623 - 389-ds-base bug fix update Updated 389-ds-base packages that fix one bug are now available for Red Hat Enterprise Linux 6. The 389 Directory Server is an LDAPv3 compliant server. 
The base packages include the Lightweight Directory Access Protocol (LDAP) server and command-line utilities for server administration. Bug Fix BZ# 1080185 Bug fixes for replication conflict resolution introduced a memory leak, which increased the memory footprint of the Directory Server process. With this update, the leaking code has been fixed, and the memory usage of the Directory Servers in the replication topology now remains stable under stress. (BZ#1147479) Users of 389-ds-base are advised to upgrade to these updated packages, which fix this bug. After installing this update, the 389 server service will be restarted automatically.
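A minimal sketch of applying this erratum and confirming the result on a Red Hat Enterprise Linux 6 system; the package version installed depends on the repositories available to the host, and the instance-wide service name used below is an assumption:
# Update the package from the configured repositories and confirm the installed build
yum update 389-ds-base
rpm -q 389-ds-base
# The 389 server service is restarted automatically by the update; verify it is running
service dirsrv status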
[ "Skipping empty access log, /var/log/dirsrv/slapd-ID/access.", "changelog iteration code returned a dummy entry with csn %s, skipping" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ch08
Chapter 17. Syncing LDAP groups
Chapter 17. Syncing LDAP groups As an administrator, you can use groups to manage users, change their permissions, and enhance collaboration. Your organization may have already created user groups and stored them in an LDAP server. OpenShift Container Platform can sync those LDAP records with internal OpenShift Container Platform records, enabling you to manage your groups in one place. OpenShift Container Platform currently supports group sync with LDAP servers using three common schemas for defining group membership: RFC 2307, Active Directory, and augmented Active Directory. For more information on configuring LDAP, see Configuring an LDAP identity provider . Note You must have cluster-admin privileges to sync groups. 17.1. About configuring LDAP sync Before you can run LDAP sync, you need a sync configuration file. This file contains the following LDAP client configuration details: Configuration for connecting to your LDAP server. Sync configuration options that are dependent on the schema used in your LDAP server. An administrator-defined list of name mappings that maps OpenShift Container Platform group names to groups in your LDAP server. The format of the configuration file depends upon the schema you are using: RFC 2307, Active Directory, or augmented Active Directory. LDAP client configuration The LDAP client configuration section of the configuration defines the connections to your LDAP server. LDAP client configuration url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: password 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5 1 The connection protocol, IP address of the LDAP server hosting your database, and the port to connect to, formatted as scheme://host:port . 2 Optional distinguished name (DN) to use as the Bind DN. OpenShift Container Platform uses this if elevated privilege is required to retrieve entries for the sync operation. 3 Optional password to use to bind. OpenShift Container Platform uses this if elevated privilege is necessary to retrieve entries for the sync operation. This value may also be provided in an environment variable, external file, or encrypted file. 4 When false , secure LDAP ( ldaps:// ) URLs connect using TLS, and insecure LDAP ( ldap:// ) URLs are upgraded to TLS. When true , no TLS connection is made to the server and you cannot use ldaps:// URL schemes. 5 The certificate bundle to use for validating server certificates for the configured URL. If empty, OpenShift Container Platform uses system-trusted roots. This only applies if insecure is set to false . LDAP query definition Sync configurations consist of LDAP query definitions for the entries that are required for synchronization. The specific definition of an LDAP query depends on the schema used to store membership information in the LDAP server. LDAP query definition baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6 1 The distinguished name (DN) of the branch of the directory where all searches will start from. It is required that you specify the top of your directory tree, but you can also specify a subtree in the directory. 2 The scope of the search. Valid values are base , one , or sub . If this is left undefined, then a scope of sub is assumed. Descriptions of the scope options can be found in the table below.
3 The behavior of the search with respect to aliases in the LDAP tree. Valid values are never , search , base , or always . If this is left undefined, then the default is to always dereference aliases. Descriptions of the dereferencing behaviors can be found in the table below. 4 The time limit allowed for the search by the client, in seconds. A value of 0 imposes no client-side limit. 5 A valid LDAP search filter. If this is left undefined, then the default is (objectClass=*) . 6 The optional maximum size of response pages from the server, measured in LDAP entries. If set to 0 , no size restrictions will be made on pages of responses. Setting paging sizes is necessary when queries return more entries than the client or server allow by default. Table 17.1. LDAP search scope options LDAP search scope Description base Only consider the object specified by the base DN given for the query. one Consider all of the objects on the same level in the tree as the base DN for the query. sub Consider the entire subtree rooted at the base DN given for the query. Table 17.2. LDAP dereferencing behaviors Dereferencing behavior Description never Never dereference any aliases found in the LDAP tree. search Only dereference aliases found while searching. base Only dereference aliases while finding the base object. always Always dereference all aliases found in the LDAP tree. User-defined name mapping A user-defined name mapping explicitly maps the names of OpenShift Container Platform groups to unique identifiers that find groups on your LDAP server. The mapping uses normal YAML syntax. A user-defined mapping can contain an entry for every group in your LDAP server or only a subset of those groups. If there are groups on the LDAP server that do not have a user-defined name mapping, the default behavior during sync is to use the attribute specified as the OpenShift Container Platform group's name. User-defined name mapping groupUIDNameMapping: "cn=group1,ou=groups,dc=example,dc=com": firstgroup "cn=group2,ou=groups,dc=example,dc=com": secondgroup "cn=group3,ou=groups,dc=example,dc=com": thirdgroup 17.1.1. About the RFC 2307 configuration file The RFC 2307 schema requires you to provide an LDAP query definition for both user and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform records. For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships: Note If using user-defined name mappings, your configuration file will differ. LDAP sync configuration that uses RFC 2307 schema: rfc2307_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 The IP address and host of the LDAP server where this group's record is stored. 
2 When false , secure LDAP ( ldaps:// ) URLs connect using TLS, and insecure LDAP ( ldap:// ) URLs are upgraded to TLS. When true , no TLS connection is made to the server and you cannot use ldaps:// URL schemes. 3 The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute . For fine-grained filtering, use the whitelist / blacklist method. 4 The attribute to use as the name of the group. 5 The attribute on the group that stores the membership information. 6 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 7 The attribute to use as the name of the user in the OpenShift Container Platform group record. 17.1.2. About the Active Directory configuration file The Active Directory schema requires you to provide an LDAP query definition for user entries, as well as the attributes to represent them with in the internal OpenShift Container Platform group records. For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, but define the name of the group by the name of the group on the LDAP server. The following configuration file creates these relationships: LDAP sync configuration that uses Active Directory schema: active_directory_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2 1 The attribute to use as the name of the user in the OpenShift Container Platform group record. 2 The attribute on the user that stores the membership information. 17.1.3. About the augmented Active Directory configuration file The augmented Active Directory schema requires you to provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform group records. For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships. LDAP sync configuration that uses augmented Active Directory schema: augmented_active_directory_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4 1 The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 2 The attribute to use as the name of the group. 
3 The attribute to use as the name of the user in the OpenShift Container Platform group record. 4 The attribute on the user that stores the membership information. 17.2. Running LDAP sync Once you have created a sync configuration file, you can begin to sync. OpenShift Container Platform allows administrators to perform a number of different sync types with the same server. 17.2.1. Syncing the LDAP server with OpenShift Container Platform You can sync all groups from the LDAP server with OpenShift Container Platform. Prerequisites Create a sync configuration file. Procedure To sync all groups from the LDAP server with OpenShift Container Platform: USD oc adm groups sync --sync-config=config.yaml --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records. 17.2.2. Syncing OpenShift Container Platform groups with the LDAP server You can sync all groups already in OpenShift Container Platform that correspond to groups in the LDAP server specified in the configuration file. Prerequisites Create a sync configuration file. Procedure To sync OpenShift Container Platform groups with the LDAP server: USD oc adm groups sync --type=openshift --sync-config=config.yaml --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records. 17.2.3. Syncing subgroups from the LDAP server with OpenShift Container Platform You can sync a subset of LDAP groups with OpenShift Container Platform using whitelist files, blacklist files, or both. Note You can use any combination of blacklist files, whitelist files, or whitelist literals. Whitelist and blacklist files must contain one unique group identifier per line, and you can include whitelist literals directly in the command itself. These guidelines apply to groups found on LDAP servers as well as groups already present in OpenShift Container Platform. Prerequisites Create a sync configuration file. Procedure To sync a subset of LDAP groups with OpenShift Container Platform, use any of the following commands: USD oc adm groups sync --whitelist=<whitelist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync --blacklist=<blacklist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync <group_unique_identifier> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync <group_unique_identifier> \ --whitelist=<whitelist_file> \ --blacklist=<blacklist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync --type=openshift \ --whitelist=<whitelist_file> \ --sync-config=config.yaml \ --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records. 17.3. Running a group pruning job An administrator can also choose to remove groups from OpenShift Container Platform records if the records on the LDAP server that created them are no longer present. The prune job will accept the same sync configuration file and whitelists or blacklists as used for the sync job.
For example: USD oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm USD oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm USD oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm 17.4. Automatically syncing LDAP groups You can automatically sync LDAP groups on a periodic basis by configuring a cron job. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an LDAP identity provider (IDP). This procedure assumes that you created an LDAP secret named ldap-secret and a config map named ca-config-map . Procedure Create a project where the cron job will run: USD oc new-project ldap-sync 1 1 This procedure uses a project called ldap-sync . Locate the secret and config map that you created when configuring the LDAP identity provider and copy them to this new project. The secret and config map exist in the openshift-config project and must be copied to the new ldap-sync project. Define a service account: Example ldap-sync-service-account.yaml kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync Create the service account: USD oc create -f ldap-sync-service-account.yaml Define a cluster role: Example ldap-sync-cluster-role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - '' - user.openshift.io resources: - groups verbs: - get - list - create - update Create the cluster role: USD oc create -f ldap-sync-cluster-role.yaml Define a cluster role binding to bind the cluster role to the service account: Example ldap-sync-cluster-role-binding.yaml kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2 1 Reference to the service account created earlier in this procedure. 2 Reference to the cluster role created earlier in this procedure. Create the cluster role binding: USD oc create -f ldap-sync-cluster-role-binding.yaml Define a config map that specifies the sync configuration file: Example ldap-sync-config-map.yaml kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: "/etc/secrets/bindPassword" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: "ou=groups,dc=example,dc=com" 5 scope: sub filter: "(objectClass=groupOfMembers)" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 Define the sync configuration file. 2 Specify the URL. 3 Specify the bindDN . 4 This example uses the RFC2307 schema; adjust values as necessary. You can also use a different schema. 5 Specify the baseDN for groupsQuery . 6 Specify the baseDN for usersQuery . 
Create the config map: USD oc create -f ldap-sync-config-map.yaml Define a cron job: Example ldap-sync-cron-job.yaml kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: "*/30 * * * *" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: "registry.redhat.io/openshift4/ose-cli:latest" command: - "/bin/bash" - "-c" - "oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm" 4 volumeMounts: - mountPath: "/etc/config" name: "ldap-sync-volume" - mountPath: "/etc/secrets" name: "ldap-bind-password" - mountPath: "/etc/ldap-ca" name: "ldap-ca" volumes: - name: "ldap-sync-volume" configMap: name: "ldap-group-syncer" - name: "ldap-bind-password" secret: secretName: "ldap-secret" 5 - name: "ldap-ca" configMap: name: "ca-config-map" 6 restartPolicy: "Never" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: "ClusterFirst" serviceAccountName: "ldap-group-syncer" 1 Configure the settings for the cron job. See "Creating cron jobs" for more information on cron job settings. 2 The schedule for the job specified in cron format . This example cron job runs every 30 minutes. Adjust the frequency as necessary, making sure to take into account how long the sync takes to run. 3 How long, in seconds, to keep finished jobs. This should match the period of the job schedule in order to clean old failed jobs and prevent unnecessary alerts. For more information, see TTL-after-finished Controller in the Kubernetes documentation. 4 The LDAP sync command for the cron job to run. Passes in the sync configuration file that was defined in the config map. 5 This secret was created when the LDAP IDP was configured. 6 This config map was created when the LDAP IDP was configured. Create the cron job: USD oc create -f ldap-sync-cron-job.yaml Additional resources Configuring an LDAP identity provider Creating cron jobs 17.5. LDAP group sync examples This section contains examples for the RFC 2307, Active Directory, and augmented Active Directory schemas. Note These examples assume that all users are direct members of their respective groups. Specifically, no groups have other groups as members. See the Nested Membership Sync Example for information on how to sync nested groups. 17.5.1. Syncing groups using the RFC 2307 schema For the RFC 2307 schema, the following examples synchronize a group named admins that has two members: Jane and Jim . The examples explain: How the group and users are added to the LDAP server. What the resulting group record in OpenShift Container Platform will be after synchronization. Note These examples assume that all users are direct members of their respective groups. Specifically, no groups have other groups as members. See the Nested Membership Sync Example for information on how to sync nested groups. In the RFC 2307 schema, both users (Jane and Jim) and groups exist on the LDAP server as first-class entries, and group membership is stored in attributes on the group. 
The following snippet of ldif defines the users and group for this schema: LDAP entries that use RFC 2307 schema: rfc2307.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com 1 The group is a first-class entry in the LDAP server. 2 Members of a group are listed with an identifying reference as attributes on the group. Prerequisites Create the configuration file. Procedure Run the sync with the rfc2307_config.yaml file: USD oc adm groups sync --sync-config=rfc2307_config.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the rfc2307_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 5 The users that are members of the group, named as specified by the sync file. 17.5.2. Syncing groups using the RFC2307 schema with user-defined name mappings When syncing groups with user-defined name mappings, the configuration file changes to contain these mappings as shown below. LDAP sync configuration that uses RFC 2307 schema with user-defined name mappings: rfc2307_config_user_defined.yaml kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: "cn=admins,ou=groups,dc=example,dc=com": Administrators 1 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 The user-defined name mapping. 2 The unique identifier attribute that is used for the keys in the user-defined name mapping. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 3 The attribute to name OpenShift Container Platform groups with if their unique identifier is not in the user-defined name mapping. 4 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 
Prerequisites Create the configuration file. Procedure Run the sync with the rfc2307_config_user_defined.yaml file: USD oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the rfc2307_config_user_defined.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected] 1 The name of the group as specified by the user-defined name mapping. 17.5.3. Syncing groups using RFC 2307 with user-defined error tolerances By default, if the groups being synced contain members whose entries are outside of the scope defined in the member query, the group sync fails with an error: This often indicates a misconfigured baseDN in the usersQuery field. However, in cases where the baseDN intentionally does not contain some of the members of the group, setting tolerateMemberOutOfScopeErrors: true allows the group sync to continue. Out of scope members will be ignored. Similarly, when the group sync process fails to locate a member for a group, it fails outright with errors: This often indicates a misconfigured usersQuery field. However, in cases where the group contains member entries that are known to be missing, setting tolerateMemberNotFoundErrors: true allows the group sync to continue. Problematic members will be ignored. Warning Enabling error tolerances for the LDAP group sync causes the sync process to ignore problematic member entries. If the LDAP group sync is not configured correctly, this could result in synced OpenShift Container Platform groups missing members. LDAP entries that use RFC 2307 schema with problematic group membership: rfc2307_problematic_users.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2 1 A member that does not exist on the LDAP server. 2 A member that may exist, but is not under the baseDN in the user query for the sync job. 
To tolerate the errors in the above example, the following additions to your sync configuration file must be made: LDAP sync configuration that uses RFC 2307 schema tolerating errors: rfc2307_config_tolerating.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3 1 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 2 When true , the sync job tolerates groups for which some members were not found, and members whose LDAP entries are not found are ignored. The default behavior for the sync job is to fail if a member of a group is not found. 3 When true , the sync job tolerates groups for which some members are outside the user scope given in the usersQuery base DN, and members outside the member query scope are ignored. The default behavior for the sync job is to fail if a member of a group is out of scope. Prerequisites Create the configuration file. Procedure Run the sync with the rfc2307_config_tolerating.yaml file: USD oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the rfc2307_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected] 1 The users that are members of the group, as specified by the sync file. Members for which lookup encountered tolerated errors are absent. 17.5.4. Syncing groups using the Active Directory schema In the Active Directory schema, both users (Jane and Jim) exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. The following snippet of ldif defines the users and group for this schema: LDAP entries that use Active Directory schema: active_directory.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins 1 The user's group memberships are listed as attributes on the user, and the group does not exist as an entry on the server. The memberOf attribute does not have to be a literal attribute on the user; in some LDAP servers, it is created during search and returned to the client, but not committed to the database. Prerequisites Create the configuration file. 
Procedure Run the sync with the active_directory_config.yaml file: USD oc adm groups sync --sync-config=active_directory_config.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the active_directory_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as listed in the LDAP server. 5 The users that are members of the group, named as specified by the sync file. 17.5.5. Syncing groups using the augmented Active Directory schema In the augmented Active Directory schema, both users (Jane and Jim) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. The following snippet of ldif defines the users and group for this schema: LDAP entries that use augmented Active Directory schema: augmented_active_directory.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com 1 The user's group memberships are listed as attributes on the user. 2 The group is a first-class entry on the LDAP server. Prerequisites Create the configuration file. Procedure Run the sync with the augmented_active_directory_config.yaml file: USD oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the augmented_active_directory_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 
5 The users that are members of the group, named as specified by the sync file. 17.5.5.1. LDAP nested membership sync example Groups in OpenShift Container Platform do not nest. The LDAP server must flatten group membership before the data can be consumed. Microsoft's Active Directory Server supports this feature via the LDAP_MATCHING_RULE_IN_CHAIN rule, which has the OID 1.2.840.113556.1.4.1941 . Furthermore, only explicitly whitelisted groups can be synced when using this matching rule. This section has an example for the augmented Active Directory schema, which synchronizes a group named admins that has one user Jane and one group otheradmins as members. The otheradmins group has one user member: Jim . This example explains: How the group and users are added to the LDAP server. What the LDAP sync configuration file looks like. What the resulting group record in OpenShift Container Platform will be after synchronization. In the augmented Active Directory schema, both users ( Jane and Jim ) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user or the group. The following snippet of ldif defines the users and groups for this schema: LDAP entries that use augmented Active Directory schema with nested members: augmented_active_directory_nested.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com 1 2 5 The user's and group's memberships are listed as attributes on the object. 3 4 The groups are first-class entries on the LDAP server. 6 The otheradmins group is a member of the admins group. When syncing nested groups with Active Directory, you must provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform group records. Furthermore, certain changes are required in this configuration: The oc adm groups sync command must explicitly whitelist groups. The user's groupMembershipAttributes must include "memberOf:1.2.840.113556.1.4.1941:" to comply with the LDAP_MATCHING_RULE_IN_CHAIN rule. The groupUIDAttribute must be set to dn . The groupsQuery : Must not set filter . Must set a valid derefAliases . Should not set baseDN as that value is ignored. Should not set scope as that value is ignored. 
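Before running the sync, you can optionally confirm that the matching rule returns the expected flattened membership by querying the LDAP server directly. The following is a sketch only: it assumes the ldapsearch client from openldap-clients is available and reuses the placeholder host and bind credentials from the examples in this chapter.
# Returns the DNs of every user whose membership chain reaches the admins group
USD ldapsearch -x -H ldap://LDAP_SERVICE_IP:389 -D "cn=admin,dc=example,dc=com" -W -b "ou=users,dc=example,dc=com" "(memberOf:1.2.840.113556.1.4.1941:=cn=admins,ou=groups,dc=example,dc=com)" dn
If the matching rule is supported, the search returns the DNs of both Jane and Jim, because Jim's membership is reached through the nested otheradmins group.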
For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships: LDAP sync configuration that uses augmented Active Directory schema with nested members: augmented_active_directory_config_nested.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ "memberOf:1.2.840.113556.1.4.1941:" ] 5 1 groupsQuery filters cannot be specified. The groupsQuery base DN and scope values are ignored. groupsQuery must set a valid derefAliases . 2 The attribute that uniquely identifies a group on the LDAP server. It must be set to dn . 3 The attribute to use as the name of the group. 4 The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. 5 The attribute on the user that stores the membership information. Note the use of LDAP_MATCHING_RULE_IN_CHAIN . Prerequisites Create the configuration file. Procedure Run the sync with the augmented_active_directory_config_nested.yaml file: USD oc adm groups sync \ 'cn=admins,ou=groups,dc=example,dc=com' \ --sync-config=augmented_active_directory_config_nested.yaml \ --confirm Note You must explicitly whitelist the cn=admins,ou=groups,dc=example,dc=com group. OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the augmented_active_directory_config_nested.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 5 The users that are members of the group, named as specified by the sync file. Note that members of nested groups are included since the group membership was flattened by the Microsoft Active Directory Server. 17.6. LDAP sync configuration specification The object specification for the configuration file is below. Note that the different schema objects have different fields. For example, v1.ActiveDirectoryConfig has no groupsQuery field whereas v1.RFC2307Config and v1.AugmentedActiveDirectoryConfig both do. Important There is no support for binary attributes. All attribute data coming from the LDAP server must be in the format of a UTF-8 encoded string. For example, never use a binary attribute, such as objectGUID , as an ID attribute. You must use string attributes, such as sAMAccountName or userPrincipalName , instead. 17.6.1. 
v1.LDAPSyncConfig LDAPSyncConfig holds the necessary configuration options to define an LDAP group sync. Name Description Schema kind String value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#types-kinds string apiVersion Defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#resources string url Host is the scheme, host and port of the LDAP server to connect to: scheme://host:port string bindDN Optional DN to bind to the LDAP server with. string bindPassword Optional password to bind with during the search phase. v1.StringSource insecure If true , indicates the connection should not use TLS. If false , ldaps:// URLs connect using TLS, and ldap:// URLs are upgraded to a TLS connection using StartTLS as specified in https://tools.ietf.org/html/rfc2830 . If you set insecure to true , you cannot use ldaps:// URL schemes. boolean ca Optional trusted certificate authority bundle to use when making requests to the server. If empty, the default system roots are used. string groupUIDNameMapping Optional direct mapping of LDAP group UIDs to OpenShift Container Platform group names. object rfc2307 Holds the configuration for extracting data from an LDAP server set up in a fashion similar to RFC2307: first-class group and user entries, with group membership determined by a multi-valued attribute on the group entry listing its members. v1.RFC2307Config activeDirectory Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory: first-class user entries, with group membership determined by a multi-valued attribute on members listing groups they are a member of. v1.ActiveDirectoryConfig augmentedActiveDirectory Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory as described above, with one addition: first-class group entries exist and are used to hold metadata but not group membership. v1.AugmentedActiveDirectoryConfig 17.6.2. v1.StringSource StringSource allows specifying a string inline, or externally via environment variable or file. When it contains only a string value, it marshals to a simple JSON string. Name Description Schema value Specifies the cleartext value, or an encrypted value if keyFile is specified. string env Specifies an environment variable containing the cleartext value, or an encrypted value if the keyFile is specified. string file References a file containing the cleartext value, or an encrypted value if a keyFile is specified. string keyFile References a file containing the key to use to decrypt the value. string 17.6.3. v1.LDAPQuery LDAPQuery holds the options necessary to build an LDAP query. Name Description Schema baseDN DN of the branch of the directory where all searches should start from. string scope The optional scope of the search. Can be base : only the base object, one : all objects on the base level, sub : the entire subtree. Defaults to sub if not set. string derefAliases The optional behavior of the search with regards to aliases. 
Can be never : never dereference aliases, search : only dereference in searching, base : only dereference in finding the base object, always : always dereference. Defaults to always if not set. string timeout Holds the limit of time in seconds that any request to the server can remain outstanding before the wait for a response is given up. If this is 0 , no client-side limit is imposed. integer filter A valid LDAP search filter that retrieves all relevant entries from the LDAP server with the base DN. string pageSize Maximum preferred page size, measured in LDAP entries. A page size of 0 means no paging will be done. integer 17.6.4. v1.RFC2307Config RFC2307Config holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the RFC2307 schema. Name Description Schema groupsQuery Holds the template for an LDAP query that returns group entries. v1.LDAPQuery groupUIDAttribute Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. ( ldapGroupUID ) string groupNameAttributes Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Container Platform group. string array groupMembershipAttributes Defines which attributes on an LDAP group entry will be interpreted as its members. The values contained in those attributes must be queryable by your UserUIDAttribute . string array usersQuery Holds the template for an LDAP query that returns user entries. v1.LDAPQuery userUIDAttribute Defines which attribute on an LDAP user entry will be interpreted as its unique identifier. It must correspond to values that will be found from the GroupMembershipAttributes . string userNameAttributes Defines which attributes on an LDAP user entry will be used, in order, as its OpenShift Container Platform user name. The first attribute with a non-empty value is used. This should match your PreferredUsername setting for your LDAPPasswordIdentityProvider . The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. string array tolerateMemberNotFoundErrors Determines the behavior of the LDAP sync job when missing user entries are encountered. If true , an LDAP query for users that does not find any will be tolerated and only an error will be logged. If false , the LDAP sync job will fail if a query for users does not find any. The default value is false . Misconfigured LDAP sync jobs with this flag set to true can cause group membership to be removed, so it is recommended to use this flag with caution. boolean tolerateMemberOutOfScopeErrors Determines the behavior of the LDAP sync job when out-of-scope user entries are encountered. If true , an LDAP query for a user that falls outside of the base DN given for the all user query will be tolerated and only an error will be logged. If false , the LDAP sync job will fail if a user query would search outside of the base DN specified by the all user query. Misconfigured LDAP sync jobs with this flag set to true can result in groups missing users, so it is recommended to use this flag with caution. boolean 17.6.5. v1.ActiveDirectoryConfig ActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the Active Directory schema. Name Description Schema usersQuery Holds the template for an LDAP query that returns user entries.
v1.LDAPQuery userNameAttributes Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Container Platform user name. The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. string array groupMembershipAttributes Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of. string array 17.6.6. v1.AugmentedActiveDirectoryConfig AugmentedActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the augmented Active Directory schema. Name Description Schema usersQuery Holds the template for an LDAP query that returns user entries. v1.LDAPQuery userNameAttributes Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Container Platform user name. The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. string array groupMembershipAttributes Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of. string array groupsQuery Holds the template for an LDAP query that returns group entries. v1.LDAPQuery groupUIDAttribute Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. ( ldapGroupUID ) string groupNameAttributes Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Container Platform group. string array
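After any of the sync operations in this chapter have been run with the --confirm flag, you can inspect the resulting group objects with standard oc commands. This is a quick sketch; the group name admins is only an example taken from the earlier scenarios:
# List every group object in the cluster
USD oc get groups
# Show one synced group, including its openshift.io/ldap.uid and openshift.io/ldap.sync-time annotations
USD oc get group admins -o yaml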
[ "url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: password 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5", "baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6", "groupUIDNameMapping: \"cn=group1,ou=groups,dc=example,dc=com\": firstgroup \"cn=group2,ou=groups,dc=example,dc=com\": secondgroup \"cn=group3,ou=groups,dc=example,dc=com\": thirdgroup", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4", "oc adm groups sync --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --sync-config=config.yaml --confirm", "oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc new-project ldap-sync 1", "kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync", "oc create -f ldap-sync-service-account.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - '' - user.openshift.io resources: - groups verbs: - get - list - create - update", "oc create -f ldap-sync-cluster-role.yaml", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2", "oc create -f ldap-sync-cluster-role-binding.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 
insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: \"/etc/secrets/bindPassword\" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" 5 scope: sub filter: \"(objectClass=groupOfMembers)\" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc create -f ldap-sync-config-map.yaml", "kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: \"*/30 * * * *\" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: \"registry.redhat.io/openshift4/ose-cli:latest\" command: - \"/bin/bash\" - \"-c\" - \"oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm\" 4 volumeMounts: - mountPath: \"/etc/config\" name: \"ldap-sync-volume\" - mountPath: \"/etc/secrets\" name: \"ldap-bind-password\" - mountPath: \"/etc/ldap-ca\" name: \"ldap-ca\" volumes: - name: \"ldap-sync-volume\" configMap: name: \"ldap-group-syncer\" - name: \"ldap-bind-password\" secret: secretName: \"ldap-secret\" 5 - name: \"ldap-ca\" configMap: name: \"ca-config-map\" 6 restartPolicy: \"Never\" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: \"ClusterFirst\" serviceAccountName: \"ldap-group-syncer\"", "oc create -f ldap-sync-cron-job.yaml", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=rfc2307_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: \"cn=admins,ou=groups,dc=example,dc=com\": Administrators 1 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: 
cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected]", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with dn=\"<user-dn>\" would search outside of the base dn specified (dn=\"<base-dn>\")\".", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" refers to a non-existent entry\". Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" and filter \"<filter>\" did not return any results\".", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3", "oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins", "oc adm groups sync --sync-config=active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: 
cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ \"memberOf:1.2.840.113556.1.4.1941:\" ] 5", "oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/ldap-syncing
13.3.2. PHP4, LDAP, and the Apache HTTP Server
13.3.2. PHP4, LDAP, and the Apache HTTP Server Red Hat Enterprise Linux includes a package containing an LDAP module for the PHP server-side scripting language. The php-ldap package adds LDAP support to the PHP4 HTML-embedded scripting language via the /usr/lib/php4/ldap.so module. This module allows PHP4 scripts to access information stored in an LDAP directory. Red Hat Enterprise Linux ships with the mod_authz_ldap module for the Apache HTTP Server. This module uses the short form of the distinguished name for a subject and the issuer of the client SSL certificate to determine the distinguished name of the user within an LDAP directory. It is also capable of authorizing users based on attributes of that user's LDAP directory entry, determining access to assets based on the user and group privileges of the asset, and denying access for users with expired passwords. The mod_ssl module is required when using the mod_authz_ldap module. Important The mod_authz_ldap module does not authenticate a user to an LDAP directory using an encrypted password hash. This functionality is provided by the experimental mod_auth_ldap module, which is not included with Red Hat Enterprise Linux. Refer to the Apache Software Foundation website online at http://www.apache.org/ for details on the status of this module.
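As a rough illustration of the kind of directory lookup both modules rely on, the following ldapsearch command (from the openldap-clients package) retrieves a user's entry and mail attribute; the server URI, base DN, and filter shown here are placeholder values and must be adapted to your directory:
ldapsearch -x -H ldap://ldap.example.com -b "ou=People,dc=example,dc=com" "(uid=jsmith)" mail
A PHP4 script using the ldap.so module performs an equivalent search with the ldap_connect(), ldap_bind(), and ldap_search() functions.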
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ldap-other-apps
Chapter 51. Bonita Component
Chapter 51. Bonita Component Available as of Camel version 2.19 Used for communicating with a remote Bonita BPM process engine. 51.1. URI format bonita://[operation]?[options] Where operation is the specific action to perform on Bonita. 51.2. General Options The Bonita component has no options. The Bonita endpoint is configured using URI syntax, with the following path and query parameters: 51.2.1. Path Parameters (1 parameter): Name Description Default Type operation Required Operation to use BonitaOperation 51.2.2. Query Parameters (9 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false boolean hostname (consumer) Hostname where the Bonita engine runs localhost String port (consumer) Port of the server hosting the Bonita engine 8080 String processName (consumer) Name of the process involved in the operation String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer deals with exceptions, which are logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean password (security) Password to authenticate to the Bonita engine. String username (security) Username to authenticate to the Bonita engine. String 51.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.bonita.enabled Enable the bonita component true Boolean camel.component.bonita.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 51.4. Body content For the startCase operation, the input variables are retrieved from the message body, which must contain a Map<String,Serializable>. 51.5. Examples The following example starts a new case in Bonita: from("direct:start").to("bonita:startCase?hostname=localhost&port=8080&processName=TestProcess&username=install&password=install") 51.6. Dependencies To use Bonita in your Camel routes you need to add a dependency on camel-bonita, which implements the component. If you use Maven, add the following to your pom.xml, replacing the version number with the latest release (see the download page for current versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bonita</artifactId> <version>x.x.x</version> </dependency>
[ "bonita://[operation]?[options]", "bonita:operation", "from(\"direct:start\").to(\"bonita:startCase?hostname=localhost&amp;port=8080&amp;processName=TestProcess&amp;username=install&amp;password=install\")", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bonita</artifactId> <version>x.x.x</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/bonita-component
2.4. Monitoring the Local Disk for Graceful Shutdown
2.4. Monitoring the Local Disk for Graceful Shutdown When the disk space available on a system becomes too small, the Directory Server process terminates. As a consequence, there is a risk of corrupting the database or losing data. To prevent this problem, you can configure Directory Server to monitor the free disk space. The monitoring thread checks the free space on the file systems that contain the configuration, transaction log, and database directories. Depending on the remaining free disk space, Directory Server behaves as follows: If the free disk space reaches the defined threshold, Directory Server: Disables verbose logging Disables access logging Deletes archived log files Note Directory Server always continues writing error logs, even if the threshold is reached. If the free disk space is lower than half of the configured threshold, Directory Server shuts down within a defined grace period. If the available disk space is ever lower than 4 KB, Directory Server shuts down immediately. If disk space is freed up, Directory Server aborts the shutdown process and re-enables all of the previously disabled log settings. 2.4.1. Configuring Local Disk Monitoring Using the Command Line To configure local disk monitoring using the command line: Enable the disk monitoring feature, and set a threshold value and a grace period: This command sets the threshold of free disk space to 3 GB and the grace period to 60 minutes. Optionally, configure Directory Server to neither disable access logging nor delete archived logs by enabling the nsslapd-disk-monitoring-logging-critical parameter: Restart the Directory Server instance: 2.4.2. Configuring Local Disk Monitoring Using the Web Console To configure local disk monitoring using the Web Console: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide. Select the instance. Open the Server Settings menu, and select Server Configuration. Select Enable Disk Space Monitoring, and set the threshold in bytes and the grace period in minutes. This example sets the monitoring threshold to 3 GB (3,221,225,472 bytes) and the grace period before Directory Server shuts down the instance after reaching the threshold to 60 minutes. Optionally, configure Directory Server to neither disable access logging nor delete archived logs by selecting Preserve Logs. Click Save Configuration. Click the Actions button, and select Restart Instance.
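In addition to configuring the threshold and grace period with dsconf, you can check the current free space on the monitored file systems manually, for example with df. The paths below are the typical default locations of an instance's configuration and database directories and might differ on your system:
df -h /etc/dirsrv/slapd-instance_name /var/lib/dirsrv/slapd-instance_name/db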
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-disk-monitoring=on nsslapd-disk-monitoring-threshold=3000000000 nsslapd-disk-monitoring-grace-period=60", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-disk-monitoring-logging-critical=on", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/diskmonitoring
5.4.2.4. Removing the Old Physical Volume from the Volume Group
5.4.2.4. Removing the Old Physical Volume from the Volume Group After you have moved the data off /dev/sdb1, you can remove it from the volume group with the vgreduce command. You can then reallocate the disk to another volume group or remove it from the system.
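For example, assuming a second volume group named myothervg exists (the name is a placeholder), you can either hand the freed physical volume to that group with vgextend, or wipe its LVM label with pvremove before physically removing the disk:
# Reallocate the physical volume to another volume group
vgextend myothervg /dev/sdb1
# Or remove the LVM label so the disk can be taken out of the system
pvremove /dev/sdb1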
[ "vgreduce myvg /dev/sdb1 Removed \"/dev/sdb1\" from volume group \"myvg\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/remove_pv_ex4
Chapter 10. Groups
Chapter 10. Groups Groups in Red Hat Single Sign-On allow you to manage a common set of attributes and role mappings for a set of users. Users can be members of zero or more groups. Users inherit the attributes and role mappings assigned to each group. To manage groups, go to the Groups left menu item. Groups Groups are hierarchical. A group can have many subgroups, but a group can only have one parent. Subgroups inherit the attributes and role mappings from the parent. This applies to the user as well. So, if you have a parent group and a child group and a user that only belongs to the child group, the user inherits the attributes and role mappings of both the parent and child. In this example, we have a top-level Sales group and a child North America subgroup. To add a group, click on the parent you want to add a new child to and click the New button. Select the Groups icon in the tree to make a top-level group. Entering a group name in the Create Group screen and clicking Save brings you to the individual group management page. Group The Attributes and Role Mappings tabs work exactly like the similarly named tabs under a user. Any attributes and role mappings you define will be inherited by the groups and users that are members of this group. To add a user to a group, go back to the user detail page and click on the Groups tab there. User Groups Select a group from the Available Groups tree and click the Join button to add the user to a group. To remove the user from a group, select the group and click Leave. Here we've added the user Jim to the North America sales group. If you go back to the detail page for that group and select the Membership tab, Jim is now displayed there. Group Membership 10.1. Groups vs. Roles In the IT world the concepts of Group and Role are often blurred and interchangeable. In Red Hat Single Sign-On, Groups are just a collection of users that you can apply roles and attributes to in one place. Roles define a type of user, and applications assign permissions and access control to roles. Aren't Composite Roles also similar to Groups? Logically they provide the same functionality, but the difference is conceptual. Composite roles should be used to apply the permission model to your set of services and applications. Groups should focus on collections of users and their roles in your organization. Use groups to manage users. Use composite roles to manage applications and services. 10.2. Default Groups Default groups allow you to automatically assign group membership whenever a new user is created or imported through Identity Brokering. To specify default groups, go to the Groups left menu item, and click the Default Groups tab. Default Groups
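Group management is also possible outside the Admin Console. As a sketch, the following Admin CLI (kcadm.sh) commands create the Sales group and its North America subgroup used in the example above; the realm name demorealm is a placeholder, and PARENT_GROUP_ID stands for the group ID returned when the Sales group is created. The CLI must first be authenticated against the server with kcadm.sh config credentials:
kcadm.sh create groups -r demorealm -s name=Sales
kcadm.sh create groups/PARENT_GROUP_ID/children -r demorealm -s name="North America"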
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_administration_guide/groups
Release Notes
Release Notes Red Hat Enterprise Linux Atomic Host 7 Release Notes Red Hat Atomic Host Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/index